How to monitor tables in SQL Server for changes

This question was asked quite some time ago, and while it covers possible solutions for SQL 2005 and 2008, it lacks a good solution for SQL 2000, which is still far too common.
I need a way to monitor certain fields of a database table for changes and notify my application when they occur, so that I can blast them out on the local network as broadcast messages, where anyone with a client can listen for them and display them as alerts (think something similar to stock market data reaching specific thresholds).
I do NOT want to poll the database, for two reasons: 1) I don't wish to add load to the servers, and 2) I would rather get notifications in near real time than wait for a polling interval to elapse.
Now, I could put logic in the applications that update the database, but the data can be updated from several sources, including the web, and I don't want to deal with web servers sending notifications across DMZ boundaries, etc. Nor do I want to have to maintain this in 20 different applications (the more overpowering issue).
I've seen this done on SQL 2000 using extended stored procedures and triggers, but the XPs seem to be difficult to make cross-platform, and they break when installed on SQL 2005 and 2008. Maybe that's just bad code in the examples I've seen; I'm not sure. Either way, I am looking for something that works in SQL 2000 and later versions.
Any ideas?
EDIT:
I've thought about dropping support for 2000, but that really doesn't solve my problem. I would like a solution that is going to continue to work for years to come. One problem with many Microsoft technologies is that support for them gets dropped. For instance, Notification Services does what I need it to do, but it was deprecated in 2008 and won't be available in the next version. So I'm looking for a solution that has a good chance of sticking around.

Very simple solution
You could have a trigger that calls a web page to notify it of an update.
This may be quite bad: if the server can't reach the web for some reason, it may make the insert operation quite slow, and depending on the frequency of inserts, the overhead could be equally bad.
Alternative plan
In a trigger, write to a queue. (I happen to be in love with MSMQ.) Then have something waiting on that queue, and you will get the messages in 'real time'. Again, it's sensitive to the frequency of updates, as above.
Better plan
Have a trigger that posts the data to a 'tblUpdatedThings' table, which you then poll. I know you don't want to poll, but I still consider this better, for the reasons above.
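A minimal sketch of that last approach, assuming a hypothetical dbo.Prices table with a PriceID key (the table and column names are illustrative, not from the question):

```sql
-- Queue table the application polls (or drains however it likes)
CREATE TABLE dbo.tblUpdatedThings (
    UpdatedThingID INT IDENTITY(1, 1) PRIMARY KEY,
    SourceTable    SYSNAME  NOT NULL,
    SourceKey      INT      NOT NULL,
    ChangedAt      DATETIME NOT NULL DEFAULT GETDATE()
);
GO

-- Trigger that records which rows changed; this syntax works on SQL 2000 and later
CREATE TRIGGER trg_Prices_RecordChange ON dbo.Prices
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.tblUpdatedThings (SourceTable, SourceKey)
    SELECT 'Prices', i.PriceID
    FROM inserted AS i;
END;
GO
```

The trade-off is the one already noted: something still has to read tblUpdatedThings, whether that is a poller, an MSMQ bridge, or (on 2005 and later) a Service Broker activation procedure.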

You want your solution to be in the database, but you want it to be database-independent. You can't have it both ways. Pick one. If you want to be independent of the database, don't allow the sources to write to the database directly, but to call a central service that you control, and where you can trap any events of interest to you.
If you want to use database functionality without polling, you have to deploy code that the database invokes, and you will have a dependency on future versions supporting your code.

Related

Transfer data between NoSQL and SQL databases on different servers

Currently, I'm working on a MERN web application that will need to communicate with a Microsoft SQL Server database on a different server but on the same network.
Data will only be "transferred" from the Mongo database to the MSSQL one based on a user action. I think I can accomplish this by simply transforming the data into the appropriate format on my Express server and writing it to MSSQL via the matching API.
On the flip side, data will be transferred from the MSSQL database to the Mongo one when a certain field is updated in a record. I think I can accomplish this with a Trigger, but I'm not exactly sure how.
Does either of these solutions sound reasonable, or are there better/industry-standard methods that I should be employing? Any and all help is much appreciated!
There are (in general) two ways of doing this.
If the data transfer needs to happen immediately, you may be able to use triggers to accomplish this, although be aware of your error handling.
The other option is to develop some form of worker process in your favourite scripting language and run this on a schedule. (This would be my preferred option, as my personal familiarity with triggers is fairly limited). If option 1 isn't viable, you could set your schedule to be very frequent, say once per minute or every x seconds, as long as a new task doesn't spawn before the previous is completed.
The broader question, though, is: do you need to have the data duplicated across two different sources? The obvious pitfall with this approach is consistency: should anything fail, you can end up with two data sources wildly out of sync with each other, and your approach will have to account for this.
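As a rough sketch of the trigger idea (option 1), one low-risk variant is to have the trigger only record *what* changed in an outbox table and let the Express/worker process do the actual transfer to Mongo. The names below (dbo.Orders, Status, OrderID) are placeholders, not from the question:

```sql
-- Outbox the worker process reads, transfers to Mongo, and marks as processed
CREATE TABLE dbo.SyncOutbox (
    OutboxID  INT IDENTITY(1, 1) PRIMARY KEY,
    RecordID  INT      NOT NULL,
    QueuedAt  DATETIME NOT NULL DEFAULT GETDATE(),
    Processed BIT      NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER trg_Orders_QueueSync ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Only queue rows when the field of interest was part of the UPDATE
    IF UPDATE(Status)
        INSERT INTO dbo.SyncOutbox (RecordID)
        SELECT i.OrderID
        FROM inserted AS i;
END;
GO
```

This keeps the trigger fast and keeps MongoDB connectivity (and its error handling) out of the database transaction.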

Replication vs Sync Framework vs Service Broker

I've asked about each of these technologies separately, and really haven't found a suitable answer.
We have a server in our central office running SQL Server 2005 Enterprise that has several large (large in the sense that DSL is the limiting factor) databases that we need local copies of at each of our locations. We currently have a few dozen locations, and are needing to bring even more online. The total number of locations we'll need to sync these databases to will be in the several hundreds in the next 2 years.
We are trying to overcome issues with the WAN connection at each location. These are DSL lines and the wiring at the locations isn't always the best. We currently have issues with some of the locations going down as often as every hour. While we are working to resolve these issues with rewiring and assistance from the local telcos, it mainly highlights the problem at hand: we need a two-way sync that can handle being occasionally-connected.
We tried transactional replication for a while, and while it worked some of the time, it was too high-maintenance for us, and it seemed to error out randomly and often with no apparent explanation, forcing us to reinitialize subscriptions (which could take upwards of 4 hours, assuming the location would stay connected long enough to get the entire snapshot in one go). We've looked at rolling our own solution from scratch, but I don't feel this would be the best idea given the scale and reliability we are needing.
So far we've also looked at Sync Framework, and as suggested by someone else, Service Broker. Sync Framework seems a better fit, but I was told that Service Broker scales better and is more reliable? I can't find any empirical data on the overhead involved with Sync Framework or Service Broker, so it's proving impossible to compare the two in this regard.
What we really need is a two-way sync between the central office server and a remote client that can run autonomously and can report to an admin in the event of a failure that requires our intervention.
There are so many possible solutions to this problem, all involving completely different technologies, that I need a fresh eye on this.
What do you think would be the optimal solution for our situation, and why?
EDIT: Obviously, upgrading to SQL Server 2008 would solve this problem easily. However, we would like to try less expensive options first.
I don't have any hard data to offer on this, but we used the Sync Framework on a project a while ago. My experience with it is really bad. It's slow (even when synchronizing relatively small tables across a LAN), scales terribly, and requires a lot of work to manually handle error conditions (it'll happily produce larger packets than WCF can handle by default, and it is only able to split updates into batches when syncing one way, not the other). And it only works with a few select databases (the client must use MS SQL Compact Edition, as I recall), unless you're willing to write your own SyncAdapter.
Overall, a lot of work just to get a fragile and inefficient solution to your problem. I wouldn't recommend it.
You can use the Sync Framework with SQL Express 2008 R1/R2 on one end and a multi-tenant SQL Server Enterprise database on the central end. Below is a sample application for n-tier sync over a secure WCF channel. You could write a Windows service to sync data from the back end:
http://www.rajneeshnoonia.com/blog/2012/03/n-tier-sync-framework/
It should be capable of handling a large number of clients (thousands).
I think we'll look into the SQL Server 2008 upgrade route. It seems the native change tracking support will be the easiest way to accomplish this.
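For reference, a minimal sketch of what that native change tracking looks like on 2008; the database and table names (CentralDb, dbo.Customers) are placeholders:

```sql
-- Enable change tracking on the database and on each table to be synced
ALTER DATABASE CentralDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Customers
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Each remote client stores the version it last synced to and asks
-- only for what changed since then
DECLARE @last_sync_version BIGINT = 0;

SELECT ct.SYS_CHANGE_OPERATION, ct.CustomerID, c.*
FROM CHANGETABLE(CHANGES dbo.Customers, @last_sync_version) AS ct
LEFT JOIN dbo.Customers AS c ON c.CustomerID = ct.CustomerID;

-- Persist this value as the new baseline for the next sync
SELECT CHANGE_TRACKING_CURRENT_VERSION();
```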

How do I convince someone they need to upsize from ms access to sql server or similar

I am having a real problem at work with a highly ingrained developer obsessed with MS Access. Users moan about random crashes, locking errors, freezes, and the application slowing down (especially in 2007), but there is strong resistance to moving it. Most of the time they blame the computer and can't be convinced that the problem is an .mdb sitting on a network drive and has nothing to do with the brand-new hardware sitting in front of them.
There is a front-end VB program hanging off it, but I don't think it would take more than a couple of weeks to adjust; in fact I would probably rewrite it, as it has years of messy code from a previous developer.
What are my best arguments to convince them we need to move it?
Does anyone else have similar problems with developers stuck in their ways?
How about the random crashes, locking errors, freezes, and slowdowns?
A quick search on the web finds some useful materials:
Best Practices When Using Microsoft Office Access 2003 in a Multi-user Environment - if the changes here can't be implemented, or would effectively take a rewrite, then that is good ammunition for doing it right.
SQL Server vs MS Access - pay special attention to feature limits. E.g. you can only have 32,000 objects in an Access DB. Caveat: though it says 255 concurrent users, and that is probably a technical limitation, the practical limit is really MUCH lower.
It's hard to convince people who are not willing to learn and are not open to new ideas. You can go on about speed issues, concurrency issues, security problems... but ultimately, some people will just never listen. Go over their heads. Rewrite it in tools from this decade and show them up. Refuse to be involved with the project any further. I don't know what the political situation is, but technically, MS Access is wrong for what you are doing, from what you've described.
Come in on a weekend, copy the database to SQL Server, change the app's connection strings to point at SQL Server, retest the application, then uninstall MS Access... everywhere.
Then don't say anything about it; let him think that the problems 'fixed themselves' and that the users are still using MS Access.
To me it depends on how many concurrent users you have and how big the database is. If you have more than 5 concurrent users then you should be thinking about a database server. The network traffic starts to get out of hand and with each concurrent user you add it just gets worse.
I have created reliable Access-based systems for years. If you are having random crashes, locking issues, and slowdowns then you aren't doing something right. I typically have a local .mda with the .mdb on the network when creating an app in Access. For good performance it's key to have the proper indexes and to optimize queries to fetch just the data you need. Whether using a separate app, Access, or some app running against SQL Server, you need to actively handle record locking properly. You can't just blindly let Access lock your records.
Forget the arguments about DB size; it is an uninformed reason to shift to a client-server platform in 90% of the cases where I hear it brought up.
Your best arguments are based on features explained at a low tech level:
(1) You can backup and perform maintenance on the DB without kicking out the users (which introduces costly downtime).
(2) Faster recovery if data is accidentally deleted/mangled or corrupted. Again, less risk and less downtime. This is always a good foundation for a business case.
(3) If (and only if) you anticipate the need to scale quite a bit, the upgrade will better allow that.
(4) If you need to run automated jobs/updates, SQL can do this much more elegantly.
Remember the contraindications for SQL Server: it is easy to get on your technical high horse about this platform versus that, but you have to balance the benefits against the costs.
SQL Server is a helluva lot more expensive to maintain, as it requires dedicated hardware, expensive licenses (server OS and DB), and usually at least a part-time DBA who is going to cost you a bare minimum of $75K (if you get lucky AND work out of Podunk, Iowa).
The best possible advice I can give you is to make sure that you have a good attitude and are known as someone who does quality work and gets things done. It sounds like you don't have any control in the situation so what you need is influence.
Find a way to solve a problem (probably a different one that is less threatening to the people involved) in the way you are suggesting. Make it work blindingly fast and flawlessly. Make it work so well that people start asking for you when they need something done. Get it done quickly, which you should be able to do because you'll be using the right tools for the job.
Be a good person to work with, not the PITA that knows how everyone else should write their code. Be able to give an answer for what you might do differently and why, but don't automatically assume that your ideas are always the best. Maybe there are trade-offs that you don't know about -- no money in the budget for the extra CALs, we have this other app that needs to be done first. This doesn't sound like your situation, but looking for opportunities to understand before making constructive criticisms can go a long way to helping people be receptive.
The other thing is that this probably has nothing to do with the technical aspects of the situation and everything to do with the insecurities of the other developer. "This is all I know. If we change it, I won't understand it and then where will I be." Look for ways to help the other guy grow -- when he's having a problem, find resources that will help him develop good technical solutions. Suggest that everyone in your department get some training in new technologies. Who knows, one good SQL Server course and the guy could become the SQL Server evangelist in the organization because now THAT'S what he knows.
Lastly, know when to cut your losses, so to speak. If you find that you're not able to do anything about the situation, don't add to the complaining. Move on to something that you can control and do it as well as you can. Maybe in the future you'll be in a position that you do have control or influence in the situation and can do something about it. If you find that you're in a company that's more dysfunctional than most, find a way to move on to a place where the environment is better.
It is possible, and actually fairly easy, to convert an Access database to having the tables/views in SQL Server while still using the Access app as a front-end.
From there, your Access-obsessed developer can still have fun with all that VBA code. Meanwhile, on the back-end, you add indexes and such to speed everything up. Maybe someday you get lucky, and he asks about stored procedures. Then, the app is just a front-end, and who cares what it's written in? Your data is safe in SQL Server.
It is possible for you to do this yourself, but just leave the production app ALOOOOOOOOOOOONE. Take a copy, and convert that copy. Then host it for a couple of users to TEST drive; make your version of the Access app show "TEST APP" in big red letters. If your developer asks what you're doing, you can tell the truth: you are testing to see if converting only the tables/views might be of some help to the overall app.
This way, you get the best of both worlds, keep your developer happy, make the users happier (hopefully), and if you play it right, your bosses will know that you handled a knotty personnel issue with your technological prowess and your maturity.
I once had similar problems with someone I would not hesitate to call a complete idiot.
It was not possible to convince them of the issues with Access. In the end it was easier to force the issue than do it "nicely"; cruel to be kind.
If they resist then you can always go above their head. Management must be aware of crashes and stability related issues. Present a plan to them to improve stability and they are likely to at least listen. They will probably then want a meeting with all developers to discuss so go into it armed with plenty of ammo.
More than "How to convince them", let's talk about "How to do it without anybody noticing"!
First of all, I advise you not to mix the code optimisation issue together with the SQL Server one. Do not give users a chance to complain about SQL while the bugs are related to something else.
If your code is really unbearable, rewrite the app before switching to SQL, keeping in mind the following points to make the final transition to SQL Server completely transparent for final users.
This is what we did 18 months ago, and I am sure we still have users thinking our database is Access:
Export the current Access database to SQL Server for testing purposes through the wizard available in Access (many problems might occur, and you could need another tool such as the one proposed here).
Create a unique connection object at the application level, so that you can freely switch from Access to SQL Server at any time (at development level, you can even add an input box at startup to ask which connection to use). We chose an ADODB connection object, but it will also work with an ODBC connection.
If you use SQL syntax to update tables, make sure that all SELECTs, INSERTs, UPDATEs and DELETEs use this connection. If you use recordsets, make sure that all of them use this connection when they are opened.
Where needed, update all connection-specific code by adding a "SELECT CASE type_Of_TheConnexion" branch for the different options.
Switch to the SQL connection... and debug till you're done!
The problems you will find are mainly linked to SQL syntax: MSSQL uses ' where Access uses " for strings and # as the date delimiter. Date format is also an issue: the standard SQL Server format is 'YYYYMMDD', while the MS Access format depends on the computer's locale (beware of conversions from date to string!) and is stored as "YYYY-MM-DD" (if I remember correctly). Booleans in SQL Server are 0 and 1, while they are True/False or 0/-1 in Access (see the short side-by-side example after this list).
Test, update code, and when you are ok, make a new data transfer, lock your app on the SQL connection, and distribute a new runtime.
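To illustrate those syntax differences, here is the same hypothetical query written for both engines (tblOrders and its columns are made up for the example):

```sql
-- Access (Jet) SQL: double quotes, # date delimiters, True/False booleans
SELECT * FROM tblOrders
WHERE CustomerName = "Smith" AND OrderDate = #2009-03-01# AND IsClosed = True;

-- SQL Server T-SQL: single quotes, 'YYYYMMDD' dates, 1/0 for bit columns
SELECT * FROM tblOrders
WHERE CustomerName = 'Smith' AND OrderDate = '20090301' AND IsClosed = 1;
```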
It depends on the type of application and data load of your database but Access is quite efficient, even over the network.
Depending on the amount of data your users deal with, you could easily scale up to 100 users on a network just using a front-end and back-end Access database.
Looks like in your case a rewrite may be in order. If your application is data-centric, it doesn't make much sense to develop it in VB6: the tools given by Access are much better than anything you'd be able to make, especially when considering Access 2007.
Upsizing to SQL Server is only really required if you're getting into issues of:
Security:
you need to make sure that only the right users can access data. You can do your own security in Access, but it's never going to be as strong as SQL Server's.
Scalability:
you're dealing with lots of data and complex queries or a lot of users and it would be better to have dedicated hardware to handle the load for the clients. The issue with this though is that while removing the pressure from the less-capable clients machines, you're adding a lot more to the server.
Integrity:
With the back-end database being just a file that needs R/W access for all connected clients, there's always the possibility that someone is going to do something bad or that a client may crash and leave the database corrupted.
If your number of users is average (I'd say 30), then there's probably no real need to upscale:
Use MS Access 2007 to develop your application, then just use the MS Access 2007 Runtime (it's free!) on all client machines to get a more modern user interface (uses the Ribbon and has lots of UI enhancements over previous versions).
You can't beat the cheapness of that solution: you only need a full retail version of MS Access, and all the rest is free, regardless of the number of users!
Don't think that moving to SQL Server is going to improve performance of your queries: MS Access often does a better job of optimizing the queries for you (it knows what needs to be displayed and does lots of caching and optimization).
Make sure you only edit small amounts of data at any one time (don't use dynaset queries just to display vast amounts of data in a datasheet; use a snapshot instead, and open a detail form that only contains the data to edit when necessary).
Cache complex queries locally.
Build some caching mechanism that leaves a copy of the results of a complex query on the local machine. The gain in performance is pretty amazing, and if the query doesn't change much (for instance a log of stock operations) you can just persist the complex/big query locally and append new records as necessary.
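A minimal sketch of that local-cache idea in Access SQL, assuming a hypothetical saved query qryStockOperations with an OperationDate column (names are invented for the example):

```sql
-- First run: materialise the expensive query into a local cache table
SELECT qryStockOperations.* INTO tblStockOpsCache
FROM qryStockOperations;

-- Subsequent runs: append only the rows added since the last refresh
INSERT INTO tblStockOpsCache
SELECT qryStockOperations.*
FROM qryStockOperations
WHERE OperationDate > (SELECT MAX(OperationDate) FROM tblStockOpsCache);
```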
There is so much more to say.
Bottom line is: you may be looking at a rewrite, but don't dismiss Access as the solution because your current application was poorly written.
Try benchmarking and showing the stats to him.
Making people change can sometimes be a real pain in the butt.
I would have to say the main arguments would be stability and speed, but of course, like you have said, they already know this and still won't move.
Another thing to try would be to show them the power of LINQ to SQL and how much cleaner it would make your application. Like Daniel Silveira said, you could try throwing a couple of stats their way and see if they are convinced.
We have an app built using MS Access as a back end and I can't wait till we get our new SQL Server so I can move everything to that.
You could show him the perf results comparing the two, but if he's really set in his ways and refuses to change, there isn't much you can do except force him somehow.
If you're his boss then just force him to change it to use SQL. If not, then convince your boss to force the change by showing him the perf results and explain it'll fix the issues you're having.
Errr, leave the team? You seem to be working with the totally wrong set of people. Now, if the team IS your company, then you are working with the wrong company.
Of course once you leave the company, you could tell your clients that you could solve the network problems on their own and make them leave the company as well. Then give them an improved system that works on SQL Server Express.

Mobile/PDA + SQL Server data synchronization

Need a little advice here. We do some windows mobile development using the .NET Compact framework and SQL CE on the mobile along with a central SQL 2005 database at the customers offices. Currently we synchronize the data using merge replication technology.
Lately we've had some annoying problems with synchronization throwing errors and generally being a bit unreliable. This is compounded by the fact that there seems to be limited information out there on replication issues. This suggests to me that it isn't a commonly used technology.
So, I was just wondering if replication is the way to go for synchronizing data, or are there more reliable methods? I was thinking web services maybe, or something like that. What do you guys use for implementing this kind of solution?
Dave
I haven't used replication a great deal, but I have used it and I haven't had problems with it. The thing is, you need to set things up carefully. No matter which method you use you need to decide on the rules governing all of the various possible situations - changes in both databases, etc.
If you are more specific about the "generally being a bit unreliable" then maybe you'll get more useful advice. As it is all I can say is, I haven't had issues with it.
EDIT: Given your response below I'll just say that you can certainly go with a custom replication that uses SSIS or some other method, but there are definitely shops out there using replication successfully in a production environment.
Well, we've had the error occur twice, which was a real pain to fix:
The insert failed. It conflicted with an identity range check constraint in database 'egScheduler', replicated table 'dbo.tblServiceEvent', column 'serviceEventID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.
When we tried running the stored procedure it messed with the identities so now when we try to synchronize it throws the following error in the replication monitor.
The row operation cannot be reapplied due to an integrity violation. Check the Publication filter. [,,,Table,Operation,RowGuid] (Source: MSSQLServer, Error number: 28549)
We've also had a few issues where snapshots became invalid, but these were relatively easy to fix. However, all this is making me wonder whether replication is the best method for what we're trying to do here or whether there's an easier method. This is what prompted my original question.
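For what it's worth, the range adjustment the first error message asks for is a short script run at the Publisher; this is only a sketch, and the table name comes from the error text above:

```sql
-- Re-allocate a fresh identity range for the problem article at the Publisher
EXEC sp_adjustpublisheridentityrange @table_name = N'tblServiceEvent';

-- Then sanity-check the table's current identity value against its data
DBCC CHECKIDENT ('dbo.tblServiceEvent');
```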
We're working on a similar situation, but ours involves programming a tool that works in a disconnected model and runs on the Windows desktop... We're using SQL Server Compact Edition for the clients and Microsoft SQL Server 2005 with a web service for the server solution.
To enable synchronization services, we initially started by building our own synchronization framework, but after many issues with keeping that framework in sync with the rest of the system, we opted to go with the Microsoft Synchronization Framework (http://msdn.microsoft.com/en-us/sync/default.aspx for reference). Our initial requirements were to make the application as easy to use as installing other packages like Intuit QuickBooks, and I think that we have closely succeeded.
The Synchronization Framework from Microsoft has its ups and downs, but the only bad thing that I can say at this point is that documentation is horrendous.
We're in discussions now to decide whether or not to continue using it or to go back to maintaining our own synchronization subsystem. YMMV on it, but for us, it was a quick fix to the issue.
You're definitely pushing the stability envelope for CE, aren't you?
When I've done this, I've found it necessary to add in a fair amount of conflict tolerance, by not thinking of it so much as synchronization as simultaneous asynchronous data collection, with intermittent mutual updates and/or refreshes. In particular, I've always avoided using identity columns for anything. If you can strictly adhere to true Primary Keys based on real (not surrogate) data, it makes things easier. Sometimes a PK comprising SourceUnitNumber and timestamp works well.
If you can, view the remotely collected data as a simple timestamped, source-ID'd, user-ID'd log of cumulative, chronologically ordered transactions. Going the other way, the host provides static validation info which never needs to go back; send back the CRUD transactions instead.
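A sketch of that kind of log table, with invented names purely to show a natural composite key in place of an identity column:

```sql
-- Cumulative, chronologically ordered transaction log keyed on real data
CREATE TABLE dbo.CollectedTransactions (
    SourceUnitNumber INT            NOT NULL,  -- which device collected it
    CollectedAtUtc   DATETIME       NOT NULL,  -- when it was collected
    CollectedByUser  INT            NOT NULL,
    Operation        CHAR(1)        NOT NULL,  -- 'C', 'U' or 'D'
    Payload          NVARCHAR(4000) NULL,
    CONSTRAINT PK_CollectedTransactions
        PRIMARY KEY (SourceUnitNumber, CollectedAtUtc)
);
```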
Post back how this turns out. I'm interested in seeing any kind of reliable Microsoft technology that helps with this.
TomH & le dorfier - I think that part of our problem is that we're allowing the customer to insert a large number of rows into one of the replicated tables with an identity field. It's a scheduling application which can automatically generate multiple tasks up to a specified month/year. One of the times it failed was around the time they entered 15,000 rows into the table. We'll look into increasing the identity range.
The Synchronization Framework sounds interesting, but it sounds like it suffers from a similar problem to replication: poor documentation. Trying to find help on replication is a bit of a nightmare, and I'm not sure I want us to move to something with similar issues. I wish Microsoft would stop releasing stuff that seems to have the support of beta software!

Is anyone using the Service Broker in SQL Server? [closed]

When I attended a presentation of SQL Server 2008 at Microsoft, they did a quick poll to see what features we were using. It turned out that in the entire lecture hall, my company was the only one using the Service Broker. This surprised me a lot, as I thought that more people would be using it.
My experience with SB is that it does its job well, but it is pretty tough to administer and it's hard to get an overview.
So, have you considered using the Service Broker? If not, why not? Did you go for MSMQ instead? Is there anything in SQL Server 2008 that would make you consider using the Service Broker?
I've been using SQL Service Broker since a couple of months after SQL 2005 was released. We use it non-stop here sending hundreds of thousands of messages through it per day.
We use it to load data from staging tables to production tables so that the service that loads the staging table doesn't have to wait for the data to actually process; it can go back and get more data to load.
We use it to queue the deletion of files from the file system. (When the row is deleted the file needs to be deleted as well.)
At prior companies I've used it to print loan documents and the checks that were sent out to the customers.
I even used Service Broker to do ETL from an OLTP database to an OLAP database for real time reporting.
Most people (especially DBAs) don't like Service Broker because there isn't any UI for it. If you want to use Service Broker or see what it's doing, you have to actually write and run some T-SQL.
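To give a feel for the kind of T-SQL involved, here is a bare-bones, single-database sketch with made-up object names; a real deployment would add routing, error handling, and an activation procedure:

```sql
-- One message type, contract, queue and service
CREATE MESSAGE TYPE [//Example/RowChanged] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Example/RowChangedContract]
    ([//Example/RowChanged] SENT BY INITIATOR);
CREATE QUEUE dbo.RowChangeQueue;
CREATE SERVICE [//Example/RowChangeService]
    ON QUEUE dbo.RowChangeQueue ([//Example/RowChangedContract]);
GO

-- Sending a message
DECLARE @dialog UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [//Example/RowChangeService]
    TO SERVICE '//Example/RowChangeService'
    ON CONTRACT [//Example/RowChangedContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//Example/RowChanged] (N'<row id="42"/>');
GO

-- Receiving (what an activation procedure or worker would run)
WAITFOR (
    RECEIVE TOP (1) conversation_handle, message_type_name,
        CAST(message_body AS XML) AS body
    FROM dbo.RowChangeQueue
), TIMEOUT 5000;
```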
I have been using SB in 2005 for about two years now, with one implementation handling several hundred thousand messages a day. I would say the biggest challenge has been not so much the architecture as understanding all the nuances involved. The documentation from Microsoft is poor, with very few practical examples. Remus Rusanu's blogs have really been helpful in doing things like dialog reuse and activation stored procedure tuning. I have found it's REALLY important to reuse dialogs as much as possible (and to work through all the associated locking involved with that), as well as to handle multiple received messages as a set rather than one at a time.
Monitoring SB can be a pain. You basically depend on a bunch of system views to tell you what's going on. Orphaned messages are a pain. There's just a lot of little gotchas that can, well, getcha.
Aside from the problems, and there aren't THAT many, I think it has really worked out better than I expected it to. Since SB is integrated into the database, there's no separate message queues to back up outside the database. It's all transactionally consistent. Performance is good. It's a great solution.
I would use it again and will continue to use it.
At my current company, our usage of SB is somewhat different to that of the other posters. We use SB in SQL2005 mainly as a management tool. For example, we use it to manage updates to a small set of mutable tables that are present in a large number of otherwise immutable databases. All the messages are between services running on the same instance and the message volume is very low.
My experience with SB has been that it can be somewhat 'fiddly' to setup correctly and, as you mentioned in your question, it is hard to get an overview of the state of SB because there is not a single monitoring tool.
Nevertheless, we have found it hugely valuable as a way to automate a lot of database management tasks in a traceable and reliable way.
I have recently considered using Service Broker for a project, but yes, decided to go for MSMQ instead.
Our architecture consisted of a number of (clustered) servers, each needing to write information into a single instance of SQL reliably.
As I understand it, SB only works for SQL to SQL communication, so we would have needed an instance of SQL on each clustered box. We felt this was a bit unnecessary, hence using MSMQ
To be honest, I can't think of a scenario where I would use SB. I'm interested in knowing a bit more about your scenario, to see if I'm missing something vital.
Service Broker can be used in various cases where automation needs to be done in a distributed architecture: applications that receive events from devices (detections) or sensors and need them processed reliably, that use those events to drive automation logic, or that need to exchange data between multiple databases or applications.
I hope the implementation can be made more secure and reliable with SB.
