I need to be able to extract and transform data from a data source on a client machine and ship it off via a web service call to be loaded into our data store. I would love to be able to leverage SSIS, but the SQL Server licensing agreement prevents me from installing Integration Services on a client machine. Can I just provide the client with copies of the Integration Services assemblies to be referenced by my app? Does anyone have any ideas on how to best implement a solution to this problem, apart from building a custom solution from the ground up? Ideally, the solution would leverage an existing ETL tool.
Thanks for your suggestions.
If you are providing your client with a service around their data, you should define a standard format they need to deliver their data in, and negotiate a delivery method for that file, well before you ever consider what to do with SSIS. Since from the comments it appears that your data is on a machine at a client's remote location, the most common method I have seen is either having the client SecureFTP a file into your network for processing, or having a job on your end that fetches the file using SecureFTP. Once you have the file on your network, writing the SSIS to process it is trivial.
If the server can reach out to the client machine, then you can just run the SSIS package on the server. What kind of data are you moving? If it's a flat file, you could FTP it to the server.
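If you go the SecureFTP route, the client-side transfer itself is only a few lines of script. Here is a minimal Python sketch using the paramiko library; the host, credentials, and file paths are placeholders for whatever delivery details you negotiate with the client.

    import paramiko

    HOST = "sftp.example.com"       # hypothetical drop-off server
    USERNAME = "client_upload"      # placeholder credentials
    PASSWORD = "secret"
    LOCAL_FILE = "daily_extract.csv"
    REMOTE_FILE = "/inbound/daily_extract.csv"

    # Open an SSH connection and start an SFTP session over it.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(HOST, username=USERNAME, password=PASSWORD)
    try:
        sftp = ssh.open_sftp()
        sftp.put(LOCAL_FILE, REMOTE_FILE)  # drop the extract for server-side processing
        sftp.close()
    finally:
        ssh.close()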
Another way to go about this is to use BCP. I'm not a big fan of this approach (SSIS is much faster, more robust, etc), but it can work in a pinch.
http://msdn.microsoft.com/en-us/library/ms162802.aspx
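As a rough illustration of the BCP route, here is a hedged Python sketch that shells out to the bcp utility to export a table to a flat file. The database, table, file, and server names are invented, and -T assumes a trusted (Windows) connection, so adjust the switches for your environment.

    import subprocess

    # Export a table to a character-format data file with the bcp utility.
    # "MyDb.dbo.SourceTable" and "CLIENTSQL01" are placeholders.
    subprocess.run(
        [
            "bcp", "MyDb.dbo.SourceTable", "out", r"C:\extracts\source.dat",
            "-S", "CLIENTSQL01",  # source SQL Server instance
            "-T",                 # trusted connection; use -U/-P for SQL auth
            "-c",                 # character (text) data format
        ],
        check=True,  # raise an exception if bcp exits with an error
    )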
I'm currently looking at metadata solutions for Microsoft SQL Server 2016. I'm interested in using SchemaSpy. I think it looks like a great free way to document metadata, but I'm worried that it may not be secure for use in a company with sensitive data.
Question: Is Schemaspy secure to use? Can it be used as a loophole to hack into servers?
Thank you very much in advance!
SchemaSpy is a solution that can be used with sensitive and confidential data.
The application code does not read the database's data itself; it reads only the database structure and metadata through the JDBC driver.
SchemaSpy runs as a local program, and everything the application generates happens only on your computer and stays there.
The code base does not use any external servers or web services in the process of generating and preparing the database documentation.
You can look at an example generated by SchemaSpy: http://schemaspy.org/sample/index.html
I have a JSON file of data which I have pulled from an API and I would very much like to just dump this data into an SQL Server.
The reason it's SQL Server specifically is that the database is already in place for the current project. I have spent time googling this and searching on here but was unable to find anything useful thus far. I'm familiar with Python but I'm open to any solution.
TL;DR: I'm interested in which languages and packages provide easy ways to automate loading JSON into a SQL Server table. Do you have any suggestions, or do you know of any packages that already achieve this?
You can use something like SSIS to accomplish this (you may already have it) by writing a script task. This could do the custom parsing and then load the data into the correct table. It can be easily automated, and I mention SSIS because it's very easy to add future tasks to the package if that's ever required.
Alternatively, you could create a script outside of the database (e.g. in Python) that parses the JSON, connects to the database through ODBC/OLEDB, and writes the records. This can be automated using Task Scheduler or something similar. An example implementation could use pyodbc, as sketched below.
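As a rough sketch of that Python route: the connection string, target table (dbo.ApiData), and JSON field names below are all hypothetical, but the parse-then-executemany pattern with pyodbc is the core of it.

    import json
    import pyodbc

    # Load the JSON file pulled from the API. The structure is assumed here:
    # a list of objects with "id", "name", and "value" fields.
    with open("api_data.json") as f:
        records = json.load(f)

    rows = [(r["id"], r["name"], r["value"]) for r in records]

    # The connection string and target table are placeholders.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
    )
    cur = conn.cursor()
    cur.fast_executemany = True  # speeds up bulk inserts noticeably
    cur.executemany(
        "INSERT INTO dbo.ApiData (Id, Name, Value) VALUES (?, ?, ?)", rows
    )
    conn.commit()
    conn.close()

A script like this drops straight into Task Scheduler or a SQL Server Agent job for automation.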
You can use a WCF web service to send the JSON data to SQL Server.
Refer to the links below; hopefully they will be helpful:
http://www.codeproject.com/Articles/167159/How-to-create-a-JSON-WCF-RESTful-Service-in-sec
http://mikesknowledgebase.azurewebsites.net/pages/Services/WebServices-Page2.htm
You can't fetch JSON data directly in SQL Server; instead, you can use a WCF service.
Ok, let me explain the environment we are facing here:
We have an ASP.NET MVC 4 app that uses a SQL Server database.
This app isolates data in "projects", so when a user connects to it, they can only work on the data of one of these projects.
Sometimes... a group of users have to travel to remote regions for some days to retrieve data for a single project, and quite often they won't be able to have an internet connection (even mobile or satellite solutions are often out of reach).
While the displaced team works on a project, people at the office can still work on the rest of the projects (but not on the one that is abroad).
So... we are pondering the possibility of using a laptop to act as a "mobile server", where users can download the data from a specific project before travelling. While abroad, they can work against this "mobile server", update any data on their project and, when they come back, they could upload their updated data to the main server.
Our idea is to create stored procedures on both servers (main and mobile) that execute the queries needed to move a project's data between them, passing the project identifier as a parameter, probably using linked servers so that main and mobile can see each other during update operations.
Our questions here are:
Is this a good approach?
Is there any other better approach that we're not seeing?
Are there any risks we should pay attention to with this or other approaches?
I've never used Bidirectional transaction replication so if that works for you, problem solved. I do have quite a bit of experience with data migration, including merging large data sets into software driven systems. And from that experience, replication has hurt us more than it has helped us (from a migration/merge view).
The biggest challenge in my opinion is going to be conflict resolution. I know you say that all of the data is in project specific databases, but there is no shared data at all? What about multiple remote users updating the same data? In that case you're going to need a little more than just replication.
Instead of maintaining two databases at all times (one for mobile, one as the regular in-house DB), why not a system where a job is run on your main system indicating that a project needs to be prepared for "offline mode" (the job could be stored procedures, SSIS packages, or straight T-SQL)? Whatever the technology, this job would copy all of the requested project data to a new database on the remote server/laptop and mark it somehow in the main database as read-only, to prevent users in the office from updating that data.
Once the data is in offline mode on the remote server, the users can update and use the data as much as they want from that remote server. Then when the users get an internet connection or they are back in the office they can kick off another job that syncs the data to the main server, removes offline mode, and deletes/archives the remote database. Almost like a temporary project database.
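To make that concrete, here is a very rough Python/pyodbc sketch of the "prepare for offline mode" job. Both connection strings, the IsOffline flag, and the table and column names are invented for illustration.

    import pyodbc

    PROJECT_ID = 42  # the project being taken into the field

    MAIN_CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=mainsrv;DATABASE=projects;Trusted_Connection=yes;"
    )
    REMOTE_CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=laptop01;DATABASE=projects_offline;Trusted_Connection=yes;"
    )

    main = pyodbc.connect(MAIN_CONN_STR)      # head-office database
    remote = pyodbc.connect(REMOTE_CONN_STR)  # laptop / mobile server

    # Flag the project read-only in the main database so office users
    # can't update it while the team is in the field.
    mcur = main.cursor()
    mcur.execute(
        "UPDATE dbo.Projects SET IsOffline = 1 WHERE ProjectId = ?", PROJECT_ID
    )

    # Copy the project's rows over to the remote database.
    rows = mcur.execute(
        "SELECT ProjectId, Field1, Field2 FROM dbo.ProjectData WHERE ProjectId = ?",
        PROJECT_ID,
    ).fetchall()
    remote.cursor().executemany(
        "INSERT INTO dbo.ProjectData (ProjectId, Field1, Field2) VALUES (?, ?, ?)",
        [tuple(r) for r in rows],
    )

    remote.commit()
    main.commit()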
Seriously, it sounds like a fun project.
Technologies to look at:
SSIS (SQL Server Integration Services) - In my experience, this is extremely fast at moving data and gives you the ability to add logic to handle conflict resolution, error handling, etc. It's free (with certain SQL Server editions) and the community is huge, so supporting it should be easy. SSIS is not as dynamic as some of the specialized solutions out there.
A data migration suite like Pervasive's Data Integrator - I loved this, but it's expensive. You could write an entire solution in this product that handles the processing of your data bidirectionally, and like SSIS it allows for complex programming logic.
T-SQL - With a linked server you could just write straight queries (using stored procedures if you wanted). The problem here is security on the linked server; we don't use them because of this issue. See: Linked Servers: Good or Bad?
Start using some of Microsoft's built-in change detection technologies right off the bat; they are harder to implement once you're already using the system. Change Data Capture (CDC) will give you a full history of the records updated, while Change Tracking will give you a lightweight summary of your changes. Using either technology will make syncing the data many times easier (a sketch using Change Tracking follows the links below).
Change Tracking: http://msdn.microsoft.com/en-us/library/bb933874.aspx
Change Data Capture: http://msdn.microsoft.com/en-us/library/cc645937.aspx
SSIS: http://msdn.microsoft.com/en-us/library/ms169917.aspx
SQL Server Agent Jobs: http://msdn.microsoft.com/en-us/library/ms189237.aspx
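To make the Change Tracking suggestion concrete, here is a minimal Python/pyodbc sketch of the sync-back step. It assumes Change Tracking has already been enabled on the database and table, and the connection string, table, and column names are placeholders.

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=laptop01;DATABASE=projects_offline;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    last_version = 0  # in practice, persist the version from the previous sync

    # CHANGETABLE returns the rows changed since the given version, along with
    # the operation type (I/U/D). For deletes, the joined columns come back NULL.
    cur.execute(
        """
        SELECT ct.SYS_CHANGE_OPERATION, ct.ProjectDataId, d.Field1, d.Field2
        FROM CHANGETABLE(CHANGES dbo.ProjectData, ?) AS ct
        LEFT JOIN dbo.ProjectData AS d ON d.ProjectDataId = ct.ProjectDataId
        """,
        last_version,
    )
    for op, row_id, field1, field2 in cur.fetchall():
        # Apply the insert/update/delete to the main server here.
        print(op, row_id, field1, field2)

    # Remember the current version so the next sync only picks up new changes.
    new_version = cur.execute("SELECT CHANGE_TRACKING_CURRENT_VERSION()").fetchone()[0]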
I have two applications, each with its own database.
1.) A desktop application with a VB.NET WinForms interface; it runs on an offline enterprise network and stores data in a central database [SQL Server].
** All data entry and other office operations are carried out and stored in the central database.
2.) The second application is built on PHP; it has HTML pages and runs as a website in an online environment. It stores all its data in a MySQL database.
** This application is accessed by registered members only, who are provided with various reports on the data processed by the first application.
Now I have to synchronize data between online and offline database servers. I am planning for following:
1.) Write a small program to export all the data from SQL Server [the offline server] to a file in CSV format.
2.) Log in to the admin section of the live server.
3.) Upload the exported CSV file to the server.
4.) Import the data from the CSV file into the MySQL database.
Is the method I am planning good, or can it be tuned to perform better? I would also appreciate other good approaches to data synchronisation that don't involve changing the applications, i.e. porting the network application to something else that uses the MySQL database.
What you are asking for does not actually sound like bidirectional sync (i.e. movement of data both ways, from SQL Server to MySQL and from MySQL to SQL Server), which is a good thing, as it really simplifies things for you. Although I suspect your method of using CSVs (which I assume you would produce with something like BCP) would work, one of the issues is that you are moving ALL of the data every time you run the process, basically overwriting the whole MySQL database each time. This is obviously somewhat inefficient, not to mention that during that time the MySQL database would not be in a usable state.
One alternative (assuming you have SQL Server 2008 or higher) would be to combine this technique with the built-in Change Tracking or Change Data Capture features. These are capabilities within SQL Server that let you determine which data has changed since a certain point in time. What you could do is create a process that extracts just the changes since the last time you checked to a CSV file and then applies those to MySQL. If you do this, don't forget to apply the deletes as well.
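As a hedged sketch of the apply side, assuming the changed rows have already been extracted to a CSV file (the three-column layout below is invented), loading them into MySQL from Python might look like this:

    import csv
    import mysql.connector  # pip install mysql-connector-python

    # Read the extracted changes; the column names are hypothetical.
    with open("changes.csv", newline="") as f:
        rows = [(r["id"], r["name"], r["amount"]) for r in csv.DictReader(f)]

    conn = mysql.connector.connect(
        host="livehost", user="syncuser", password="secret", database="reports"
    )
    cur = conn.cursor()

    # Upsert so re-running the import is safe; deletes would be applied
    # separately, as noted above.
    cur.executemany(
        """
        INSERT INTO sales (id, name, amount) VALUES (%s, %s, %s)
        ON DUPLICATE KEY UPDATE name = VALUES(name), amount = VALUES(amount)
        """,
        rows,
    )
    conn.commit()
    conn.close()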
I don't think there's an off-the-shelf solution for what you want that you can use without customization, but the MS Sync Framework (http://msdn.microsoft.com/en-us/sync/default) sounds close.
You will probably need to write a provider for MySQL to make it go - which may well be less work than writing the whole data synchronization logic from scratch. Voclare is right about the challenges you could face with writing your own synchronization mechanism...
Do look into SQL Server Integration Services as a good alternative.
We upload sales transactions from our stores to the head office server. At the moment, we use DTS (SQL Server Data Transformation Services), but we're planning on replacing that with Microsoft Sync Services for ADO.NET, as this seems to be Microsoft's preferred solution for this type of setup and we want to follow the standard (which will hopefully be around for a long time).
Here are the details of our setup and what we’re planning. I’m looking for some advice, especially about whether Sync Services fits into our solution.
Situation
Each store has a 3rd-party EPOS system which stores sales in a Microsoft Access 2000 database, which we can access. Our head office database is SQL Server 2005, but it will be upgraded to 2008. The head office is not on a VPN with all the stores, but we can open up our firewall to the stores' IP addresses so that they can send data directly to SQL Server. The stores are always connected to the internet via ADSL, although they do lose their connection, and we don't want to lose sales data.
We are only uploading transactions from the store – definitions do not need to be downloaded.
Current solution
We have written a Windows service that runs on the store PC. This service downloads a DTS package from the server (which contains all the details of the upload) and runs it in the store – and this will upload sales to our server.
We chose DTS, because it is free when you install MSDE. We can’t use SSIS, because that would require a SQL Server licence at every store.
Another reason we chose DTS is that the details of the upload (i.e. which tables and fields to include) are stored on our head office server, so if we need to change things we can do that centrally and don't need to install anything new at the stores. This isn't a showstopper, but it would be nice to have this ability in our new solution.
Potential solution - Microsoft Sync Services for ADO.NET
We are currently building a proof of concept with Microsoft Sync Services for ADO.NET. The idea is to put SQL CE (SQL Server Compact 3.5) in each store (client) and sync that to the head office SQL Server 2005 database (server). We'll get the data into the SQL CE database either by (1) syncing it with the Access 2000 database or (2) getting the EPOS system developers to write sales straight into the SQL CE database – probably (2). But our main concern is getting the data from the store to the head office server. This approach seems to be Microsoft's preferred solution for occasionally connected systems, and that is what made us look seriously at Sync Services.
I’m hoping that using this will mean that most of the work needed to upload the sales will be built into Sync Services and we won’t have to re-invent the wheel.
Potential solution - Upload to a custom web service
There is also the possibility of uploading the sales transactions to a custom web service on our head office server and from there into our SQL Server database. This means that we would have to build our own mechanism for determining which rows are new, as well as caching for when the systems are disconnected. We might also be missing out on other functionality that comes built into Sync Services.
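For what it's worth, here is a bare-bones Python sketch of that custom upload mechanism; the endpoint URL, the SyncState high-water-mark table, and the Sales layout are all invented. The idea is simply to track the highest row id already sent and POST anything newer.

    import pyodbc
    import requests

    UPLOAD_URL = "https://headoffice.example.com/api/sales"  # hypothetical endpoint

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=storepos;DATABASE=sales;Trusted_Connection=yes;"  # placeholder store DB
    )
    cur = conn.cursor()

    # High-water mark: the last row id successfully uploaded.
    last_id = cur.execute("SELECT LastUploadedId FROM dbo.SyncState").fetchone()[0]

    new_rows = cur.execute(
        "SELECT SaleId, SoldAt, Amount FROM dbo.Sales WHERE SaleId > ?", last_id
    ).fetchall()

    for sale_id, sold_at, amount in new_rows:
        resp = requests.post(
            UPLOAD_URL,
            json={"saleId": sale_id, "soldAt": str(sold_at), "amount": float(amount)},
            timeout=30,
        )
        resp.raise_for_status()  # stop on failure so no sale is silently skipped
        # Advance the high-water mark only after a confirmed upload.
        cur.execute("UPDATE dbo.SyncState SET LastUploadedId = ?", sale_id)
        conn.commit()

This is roughly the wheel that Sync Services would otherwise provide for you, which is the trade-off to weigh.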
Please let me know if you have any advice that will help, especially on the question: "Is Sync Services the right solution?". The problem that we are trying to solve seems very generic (uploading sales from stores), and I'd like to solve it with a generic solution.
Microsoft Sync Services is more than you need, but it will certainly do what you want, and it was built with your type of application in mind.
As with most new technologies out of Microsoft (caution: generalization!), you may find that it's not as mature as you might like. It'll do what you need it to, but you may run into issues that aren't easily resolved because it hasn't been put through the wringer. As an early adopter, though, you may find that the Sync developers are eager to help you out when you get stuck, so this isn't as big a problem as it might seem.
Make sure you read through all the literature on it, some of which is here, or linked in the following sites:
http://msdn.microsoft.com/en-us/sync/default.aspx
http://msdn.microsoft.com/en-us/sync/bb887608.aspx
http://en.wikipedia.org/wiki/Microsoft_Sync_Framework
Given your one-way flow of information and centralized layout, though, I expect you should have few, if any, issues setting it up and using it.
Be sure to report your experience back here!
-Adam