I posted a similar question before, but since then I have done some research and thought I'd put this one out again to see if anyone has any thoughts on my problem.
I am running a WPF application with C# as the code-behind. The database I am using is SQL Server 2005.
I am currently connecting to the database using ADO.NET and retrieving data from stored procs in the DB. This is stored in DataSets and, further down the line, bound to controls in my WPF application.
Now, the data I am binding to in the DB is constantly changing, say every few seconds. What I want is a mechanism that automatically tells me in C# when the data I have bound to (that is, the data returned from my stored procs) has changed.
I looked on the web and found Notification Services and the SqlDependency class, but that approach is being deprecated. I also saw CLINQ, but that doesn't seem to handle the database notification side; from what I understand it only works for collections in C#.
My Plan B is to have a thread in my C# code poll the same stored proc every few seconds, based on a timestamp column stored on every row in the returned dataset. If the returned rows have a timestamp greater than the one I last saw, then retrieve that data. This would run in the new thread, looping over and over, and if any data came back from the query that would mean the data had changed, so I would store it into my collections in C# so they can be bound to my WPF app.
Obviously this is not something I really want to do, as I thought there might be a smarter way, but I just can't seem to locate anything.
Does anyone have any ideas on this one?
Much appreciated
Iffy.
How about SQL CLR triggers? SQL triggers run every time an update/insert/delete occurs on the targeted tables in your database, and CLR triggers extend this functionality to managed code:
http://msdn.microsoft.com/en-us/library/938d9dz2(VS.80).aspx
I'm not an expert on this, but hopefully it gives you something useful to look at.
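Very roughly, a CLR trigger is just a static method decorated with the SqlTrigger attribute. A minimal, untested sketch, where the table names dbo.MyData and dbo.ChangeLog are made up:

```csharp
// Minimal CLR trigger sketch (untested); dbo.MyData and dbo.ChangeLog are assumed table names.
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class Triggers
{
    [SqlTrigger(Name = "NotifyDataChanged", Target = "dbo.MyData",
                Event = "FOR INSERT, UPDATE, DELETE")]
    public static void NotifyDataChanged()
    {
        // TriggerContext tells you which action fired the trigger.
        SqlTriggerContext ctx = SqlContext.TriggerContext;

        // Keep the work in here as cheap as possible: this runs inside
        // every write to the table. Here we just record that a change happened.
        using (var conn = new SqlConnection("context connection=true"))
        using (var cmd = new SqlCommand(
            "INSERT INTO dbo.ChangeLog (Action, ChangedAt) VALUES (@action, GETUTCDATE())", conn))
        {
            cmd.Parameters.AddWithValue("@action", ctx.TriggerAction.ToString());
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

Bear in mind your application would still need something watching that ChangeLog table (or whatever notification you raise here), so this complements rather than replaces a polling approach.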
Plan B sounds like a good choice to me - there are no notification methods built-in to SQL Server to allow you to do this, so your only options would be:
Triggers combined with CLR user defined functions. The trouble is that depending on how much work you do in your user defined function this could add quite a lot of overhead for anyone writing to the database - this is not the most efficient layer to implement a notification mechanism in.
Implement some sort of notification mechanism and make sure that everyone who writes to the database uses this notification mechanism to notify others that the data has changed.
Considering how much effort #2 is, I'd say that plan B is a pretty good choice.
Depending on how frequently you poll and how efficient your polling is, you will probably find that the end effect for the user is just as good as option #2 anyway, and the overhead of the polling is probably less than you think (if you have many users, you can use techniques like caching or a shared polling thread to help out).
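For what it's worth, the polling loop can be quite small. A minimal sketch, where the stored proc dbo.GetChangedRows, its @since parameter, the LastModified column, and the connection string are all assumed names rather than anything from your schema:

```csharp
// Polling sketch; dbo.GetChangedRows, the LastModified column and the
// connection string are assumptions, not part of the original schema.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Threading;

public class ChangePoller
{
    private readonly string _connectionString;
    private DateTime _lastSeen = DateTime.MinValue;
    private Timer _timer;

    public ChangePoller(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Start()
    {
        // Poll every 5 seconds on a background thread.
        _timer = new Timer(_ => Poll(), null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
    }

    private void Poll()
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("dbo.GetChangedRows", conn)
                             { CommandType = CommandType.StoredProcedure })
        {
            cmd.Parameters.AddWithValue("@since", _lastSeen);
            conn.Open();

            var table = new DataTable();
            new SqlDataAdapter(cmd).Fill(table);

            if (table.Rows.Count > 0)
            {
                // Remember the newest timestamp so the next poll only sees later changes.
                _lastSeen = table.AsEnumerable().Max(r => r.Field<DateTime>("LastModified"));

                // Marshal back to the UI thread (Dispatcher.Invoke in WPF) and
                // update the bound collections here.
            }
        }
    }
}
```

If the table has a rowversion column you could track that instead of a datetime, which avoids any clock-skew issues between writers.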
EDIT: From the initial comments, it seems I may be barking up the wrong tree. I initially suggested using replication, but according to the networking team that is not possible because the two databases are on different virtual networks. It doesn't seem like this sort of thing would ever be "impossible", but I don't have the in-depth knowledge required to "fight back". Is there anything specific I should look into regarding SQL replication over the internet/different virtual networks, or points that I could bring up?
I've got an application that uses EF to save changes to an MS SQL database. For reasons, I need to be able to capture all changes made through the application and commit those exact DB changes to another, remote MS SQL db, using Kafka as the communication hub. The solution contains a ton of data entities, so ideally I'm looking for a generic way to capture these changes from the application and apply them to the remote DB where appropriate. The consuming application that is reading the messages from Kafka will also be applying the changes via EF (at least for my purposes).
So, is it possible to do something like this without having to strongly type everything for my hundreds of entities? What are the best practices for doing such a thing? Do I produce messages containing the new entity and its change state (added, updated, deleted)? I'm sure it's a bad idea for some reason, but what about serializing the entire EF ChangeTracker and sending that, consolidating all of the adds/updates/deletes and allowing us to make only a single call instead of a bunch?
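To make the question more concrete, this is roughly the producer-side shape I'm imagining; it's only a sketch (assuming EF Core), and ChangeMessage and its fields are names I've made up:

```csharp
// Sketch of capturing pending EF changes generically before SaveChanges (EF Core assumed).
// ChangeMessage and its fields are invented names for the Kafka payload.
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Newtonsoft.Json;

public class ChangeMessage
{
    public string EntityType { get; set; }   // CLR type name of the entity
    public string State { get; set; }        // Added / Modified / Deleted
    public string Payload { get; set; }      // serialized property values
}

public static class ChangeCapture
{
    public static List<ChangeMessage> Capture(DbContext context)
    {
        return context.ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added
                     || e.State == EntityState.Modified
                     || e.State == EntityState.Deleted)
            .Select(e => new ChangeMessage
            {
                EntityType = e.Metadata.ClrType.FullName,
                State = e.State.ToString(),
                // For deletes the original values are what the consumer needs
                // to identify the row; otherwise send the current values.
                Payload = JsonConvert.SerializeObject(
                    e.State == EntityState.Deleted
                        ? e.OriginalValues.ToObject()
                        : e.CurrentValues.ToObject())
            })
            .ToList();
    }
}
```

The consumer would then look the CLR type up by name, deserialize the payload into it, and attach it to its own context with the matching state.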
If I need to provide any more information please just let me know, I'll be monitoring this thread closely.
Currently, I'm working on a MERN web application that will need to communicate with a Microsoft SQL Server database on a different server but on the same network.
Data will only be "transferred" from the Mongo database to the MSSQL one based on a user action. I think I can accomplish this by simply transforming the data to transfer into the appropriate format on my Express server and connecting to the MSSQL via the matching API.
On the flip side, data will be transferred from the MSSQL database to the Mongo one when a certain field is updated in a record. I think I can accomplish this with a Trigger, but I'm not exactly sure how.
Do either of these solutions sound reasonable, or are there better/more industry-standard methods I should be employing? Any and all help is much appreciated!
There are (in general) two ways of doing this.
If the data transfer needs to happen immediately, you may be able to use triggers to accomplish this, although be aware of your error handling.
The other option is to develop some form of worker process in your favourite scripting language and run it on a schedule. (This would be my preferred option, as my personal familiarity with triggers is fairly limited.) If option 1 isn't viable, you could set the schedule to be very frequent, say once per minute or every x seconds, as long as a new task doesn't spawn before the previous one has completed.
The broader question though, is do you need to have data duplicated across two different sources? The obvious pitfall with this approach is consistency, should anything fail you can end up with two data sources wildly out of sync with each other and your approach will have to account for this.
I have a task where I need to get input from a user and perform some updates to a DB table. Now, I know this is something that should be done through a UI, but due to some limitations I have been assigned the task of doing it through an SSRS report. I know this is possible, but is it good practice to do updates or inserts through an SSRS report?
First, SSRS is not designed for this kind of thing. Maybe someone has done it, but I would advise you to consider an Access form against a SQL back-end, SharePoint, or an ASP.NET web form instead.
I agree with Zaynul, this is not good practice. Interface-wise, users expect reports to retrieve data, not to cause updates or inserts. Report automation tools (largely out of your control), like subscriptions, could end up running this in a way you don't want. It's probably being used as a substitute for writing an interface element in a "program". Finally, the lack of control over parameter fields makes validating inputs more painful (compare with, say, VB). If the drawbacks are unclear to you, you should probably avoid this path altogether.
If you do need to do something like this, bear in mind the drawbacks and take precautions:
Confirmation parameters "Update? Y/N"
Preview what will be updated or inserted before allowing an update
Is there a way to "reverse" the update or insert?
Is the change stored so there is an audit trail for changes done this way?
Follow the same rules you'd use if you wrote a real interface/program for updates
We are running a fairly uncommon ERP system from a small IT business, which doesn't allow us to modify data in an extensive way. We thought about doing a data update by exporting the data we wanted to change directly from the DB and using Excel VBA to update a bunch of data across different tables. We now have the updated data in Excel, and it is supposed to be written back into the Oracle DB.
The IT business's support told us not to do so, because of all the triggers that run in the background during a regular data update in their program. We are pretty afraid of damaging the DB, so we are looking for the best way to do the data update without bypassing any triggers. To be more specific, there are some thousands of changes we've made across different columns and tables, merged together in one Excel file. We have to be sure that inserting the modified data into the DB still fires all the triggers the ERP software fires during a data update.
Is there anyone who knows a good way to do so?
I don't know what ERP system you are using, but I can relate some experiences from Oracle's E-Business Suite.
Nowadays, Oracle's ERP includes a robust set of APIs that will allow your custom programs to safely maintain ERP data. For example, if you want to modify a sales order, you use Oracle's API for that purpose and it makes sure all the necessary, related validations and logic are applied.
So, step #1 -- find out if your ERP system offers any APIs to allow you to safely update your data.
Back in the early days of Oracle's ERP, there were not so many APIs. In those days, when we needed to update a lot of tables and had no API available, the next approach would be to use some sort of data loader tool. The most popular was, in fact, called "Data Loader". What this would do is read your data from an Excel spreadsheet and send it to the ERP's user interface, exactly as though it were being typed in by a user. Since the data went through the ERP's UI, all the necessary validations and logic would automatically be applied.
In really extreme cases, when there was no API and DataLoader was, for whatever reason, not practical, it was still sometimes deemed necessary and worth the risk to attempt our own direct update of the ERP tables. This is, in general, risky and a bad practice, but sometimes we do what we must.
In these cases, we would start a database trace going on a user's session as they keyed in a few updates via the ERP's user interface. Then, we would use the trace to figure out what validations and related logic we needed to apply during our custom direct updates. We would also analyze the source code of the ERP system (since we had it available in the case of Oracle's ERP). Then, we would test it extensively. And, after all that, it was still risky and also prone to break after upgrades. But, in general, it worked as a last resort.
No, my problem is that I need to do the work quickly by automating part of my process. The work is already done in Excel, that's true, but the data needed the modification anyway. The only question is whether I put it in manually with copy & paste into the DB through our ERP, or all at once through I don't know what.
But I guess Mathew is right. There are validation processes in the ERP so we can't write it directly into the db.
I don't know, maybe you could contact me if you have an idea for doing this without the ERP in a non-risky manner.
I am looking for the easiest method to pull data from a WCF and load it into a SQL Server database.
I have a request for some data in our database. To learn EF, I created an Entity Model based on a view that pulls the data (v_report_all_clients). I created a WCF service based on that model and tested it with a simple WPF DataGrid. The WCF service returns a collection of v_report_all_clients objects, which loads easily into the simple data grid.
The user requesting the data is going to load the results into their own database. I believe they have access to SSIS packages, but I'm not sure using SSIS is the easiest method. I tried two approaches and neither was successful, let alone easy. Briefly, the methods I tried are:
Use a Web Service Task - this works, but unless I missed something you have to send the output to an XML file, which seems like a wasteful middle step.
Use a Script Component - I am running into various issues trying to get this to work, even after following some online examples. I can try to fight through the issues if this ends up being the best method, but I am still not sure it is the easiest. Plus, the online examples didn't include the next logical step of loading into a database (roughly the step the sketch below tries to cover).
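For what it's worth, here is the rough shape of what I had in mind for that last step; the proxy name ReportServiceClient, the GetAllClients method, and the table/column names are all invented, so treat it as a sketch only:

```csharp
// Sketch: pull rows from the WCF service and bulk-load them into a staging table.
// The generated proxy (ReportServiceClient), its GetAllClients method, and the
// table/column names are assumptions, not real names from my project.
using System.Data;
using System.Data.SqlClient;

public class WcfToSqlLoader
{
    public void Load(string connectionString)
    {
        var client = new ReportServiceClient();   // generated WCF proxy (assumed name)
        var rows = client.GetAllClients();         // the v_report_all_clients collection

        // Copy the objects into a DataTable that mirrors the destination table.
        var table = new DataTable();
        table.Columns.Add("ClientId", typeof(int));
        table.Columns.Add("ClientName", typeof(string));
        foreach (var r in rows)
            table.Rows.Add(r.ClientId, r.ClientName);

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.ClientReportStaging";
            bulk.WriteToServer(table);
        }

        client.Close();
    }
}
```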
This is my first attempt to use WCF as a means to distribute data from our database to various users. I'd like to leverage this technology for some of our larger web-based reports that end up being almost an entire table dump. But if the users cannot easily integrate the WCF output into something they can use, then I might have to stick with web-based reporting.
WCF services generally do not "contain" any data. From your description I am assuming the data you want to pull is probably contained in a SQL database somewhere on the lan/wan.
If it's in a database, and you want to put it in another database, then there is already a technology for doing that, it's called SSIS, which you're already using.
So my solution would be: don't use WCF for this task.
If you're going outside the local lan/wan for the data, then there are other ways to replicate the data locally.
Appreciate this does not answer your question as asked.
EDIT
I'm uncertain how database ownership "chaining" works, but suffice to say that either of the two methods you're proposing above (built-in task or script task) would appear to be eminently feasible. I am not an SSIS guru, so I would struggle to make a recommendation as to which one to use.
However, by introducing a WCF service call into an ETL process you need to bear a few things in mind:
You are introducing an out-of-process call with potentially large performance implications. WCF is not a lightweight stack: opening a channel takes seconds rather than the milliseconds it takes to open a DB connection (see the sketch after this list for one way to soften that).
WCF service calls fail all the time. Handling timeouts and errors on the service side will need to be planned into your ETL process.
If you want to "leverage the service for other uses", you may need to put quite a lot of upfront time into making sure the service contract is useful, coherent, and extensible - this is time you would not have to spend going direct to the data.
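On the first point, one common mitigation is to pay the channel set-up cost once and reuse the factory for the whole ETL run. A rough sketch, where IReportService and the endpoint name "ReportServiceEndpoint" are assumptions rather than anything from the question:

```csharp
// Sketch: reuse a single ChannelFactory for the whole ETL run instead of
// constructing a new client per call. IReportService and the endpoint
// configuration name are assumptions.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IReportService
{
    [OperationContract]
    int GetClientCount();   // placeholder operation
}

public static class ReportClient
{
    // Building the factory is the expensive part; channels created from it are cheap.
    private static readonly ChannelFactory<IReportService> Factory =
        new ChannelFactory<IReportService>("ReportServiceEndpoint");

    public static TResult Call<TResult>(Func<IReportService, TResult> action)
    {
        var channel = Factory.CreateChannel();
        try
        {
            return action(channel);
        }
        finally
        {
            // Faulted channels must be aborted, not closed.
            var client = (IClientChannel)channel;
            if (client.State == CommunicationState.Faulted)
                client.Abort();
            else
                client.Close();
        }
    }
}
```

This also gives you one place to put the timeout and error handling mentioned in the second point.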
Hope this has helped you.
I think the idea given by preet sangha is the best answer. I've experimented with a few possible solutions and I think a WCF Data Service is the easiest solution. After installing the OData add-on package for SSIS I was able to pipe the WCF Data Service directly to another table (or any other SSIS destination). It was very simple and easy in my opinion.
I think a second-place vote might go to Web API, which gives you more flexibility and control. The problem is that it comes at the price of requiring more effort. I am also finding some quirks in the Web API scaffolding.