Is anyone using the Service Broker in SQL Server? [closed] - sql-server

When I attended a presentation of SQL Server 2008 at Microsoft, they did a quick poll to see which features we were using. It turned out that in the entire lecture hall, my company was the only one using the Service Broker. This surprised me a lot, as I thought more people would be using it.
My experience with SB is that it does its job well, but it is pretty tough to administer and hard to get an overview of.
So, have you considered using the Service Broker? If not, why not? Did you go for MSMQ instead? Is there anything in SQL Server 2008 that would make you consider using the Service Broker?

I've been using SQL Service Broker since a couple of months after SQL 2005 was released. We use it non-stop here, sending hundreds of thousands of messages through it per day.
We use it to load data from staging tables to production tables so that the service that loads the staging table doesn't have to wait for the data to actually process; it can go back and get more data to load.
We use it to queue the deletion of files from the file system. (When the row is deleted the file needs to be deleted as well.)
At prior companies I've used it to print loan documents and the checks that were sent out to the customers.
I even used Service Broker to do ETL from an OLTP database to an OLAP database for real time reporting.
Most people (especially DBAs) don't like Service Broker because there isn't any UI for it. If you want to use Service Broker or see what it's doing, you have to actually write and run some T-SQL.
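For example, a minimal sketch of "seeing what it's doing" with nothing but T-SQL might look like this (the queue name is illustrative, not from my setup):

    -- Messages that have not yet been delivered (errors show up in transmission_status)
    SELECT to_service_name, enqueue_time, transmission_status
    FROM sys.transmission_queue;

    -- Open conversations and their current state
    SELECT conversation_handle, far_service, state_desc
    FROM sys.conversation_endpoints;

    -- Rough depth of a specific queue (dbo.TargetQueue is a placeholder name)
    SELECT COUNT(*) AS queued_messages
    FROM dbo.TargetQueue WITH (NOLOCK);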

I have been using SB in 2005 for about two years now, with one implementation handling several hundred thousand messages a day. I would say the biggest challenge has been not so much the architecture as understanding all the nuances involved. The documentation from Microsoft is poor, with very few practical examples. Remus Rusanu's blogs have really been helpful in doing things like dialog reuse and activation stored procedure tuning. I have found it's REALLY important to reuse dialogs as much as possible (and to work through all the associated locking involved with that), as well as to handle multiple received messages as a set rather than one at a time.
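As a rough illustration of the set-based approach (queue and table names here are placeholders, not from my implementation), an activation procedure can drain a batch in one call:

    DECLARE @messages TABLE (
        conversation_handle UNIQUEIDENTIFIER,
        message_type_name   NVARCHAR(256),
        message_body        VARBINARY(MAX)
    );

    -- Pull up to 1000 messages from one conversation group in a single call
    WAITFOR (
        RECEIVE TOP (1000)
            conversation_handle,
            message_type_name,
            message_body
        FROM dbo.TargetQueue
        INTO @messages
    ), TIMEOUT 5000;

    -- Process the whole batch with set-based logic here, and only issue
    -- END CONVERSATION when a dialog is genuinely finished (dialog reuse).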
Monitoring SB can be a pain. You basically depend on a bunch of system views to tell you what's going on. Orphaned messages are a pain. There are just a lot of little gotchas that can, well, getcha.
Aside from the problems, and there aren't THAT many, I think it has really worked out better than I expected it to. Since SB is integrated into the database, there are no separate message queues to back up outside the database. It's all transactionally consistent. Performance is good. It's a great solution.
I would use it again and will continue to use it.

At my current company, our usage of SB is somewhat different from that of the other posters. We use SB in SQL 2005 mainly as a management tool. For example, we use it to manage updates to a small set of mutable tables that are present in a large number of otherwise immutable databases. All the messages are between services running on the same instance, and the message volume is very low.
My experience with SB has been that it can be somewhat 'fiddly' to set up correctly and, as you mentioned in your question, it is hard to get an overview of the state of SB because there is no single monitoring tool.
Nevertheless, we have found it hugely valuable as a way to automate a lot of database management tasks in a traceable and reliable way.

I have recently considered using Service Broker for a project, but yes, decided to go for MSMQ instead.
Our architecture consisted of a number of (clustered) servers, each needing to write information into a single instance of SQL reliably.
As I understand it, SB only works for SQL-to-SQL communication, so we would have needed an instance of SQL on each clustered box. We felt this was a bit unnecessary, hence the choice of MSMQ.
To be honest, I can't think of a scenario where I would use SB. I'm interested in knowing a bit more about your scenario, to see if I'm missing something vital.

Service Broker can be used in various cases where automation is needed in a distributed architecture: for example, applications that receive events from devices or sensors and need to process them reliably, that drive automation logic from those events, or that need to exchange data between multiple databases or applications.
I would expect such an implementation to be more secure and reliable with SB.

Related

creating database for each new company [duplicate]

I am building a SaaS application and we are discussing one database per client vs. shared databases. I have read a lot, including some topics here on SO, but I still have many doubts.
Our platform should be highly customizable by each client (they should be able to have custom tables and add custom fields to existing tables).
The multiple-database approach seems great in this case.
The problem is: should my "users" table be in the master database or in each client database?
A user might have one or more organizations, so it would be present in multiple databases.
Also, what about generic tables like a countries table, etc.?
It makes sense for those to be in the master database. But I have many tables with a created_by field that has a foreign key to the user, and I also have some permission-related tables per client.
I would lose the power of foreign keys with multiple databases, which means more queries to the database. I know I can join across databases if they are on the same server, but then I lose scalability (I might need multiple database servers in the future).
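For illustration, the kind of same-server cross-database join I mean would look something like this on a single MySQL server (database, table, and column names are hypothetical):

    SELECT o.id, o.created_at, u.email
    FROM client_acme.orders AS o
    JOIN master_db.users    AS u ON u.id = o.created_by;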
I have thought about federated tables, but I'm not sure about the performance.
The technologies I am using are PHP with the Symfony 2 framework and MySQL for the database.
Also, I am worried about the maintenance of such a system. We could create some scripts to automate the schema changes in all databases, but if we have 10k clients that would mean 10k databases.
What is your opinion about this?
The main characteristic of my app should be flexibility, so if a client needs something more specific that the base platform doesn't have, it should be possible to build it for them.
Some classic problems here. Have you ever been to http://highscalability.com/? Some good case studies there.
From personal experience, if you try to share clients on one server, you will find that a very successful/active user will take up all the resources of the machine over time. We had one client in a SaaS that destroyed a shared server, and we had to move him somewhere else.
I would rip out global enumerations into a service. You can make one central database for things like list of countries, list of states, etc. and put it behind a web service layer. Also in that database you can have user management/managing what server belongs to what user etc. You can make a management portal that reads/writes to this database for managing your user base.
If I was doing a SAAS again, I would start small and wait for the pain to hit. What you really want are good tools to address the scaling issues when they happen. Have some scripts ready to do rolling schema changes across servers (no way to avoid this once you have more than one server). Have scripts to take down machines while you are modifying the schema. Have scripts to migrate a user from a shared server to a dedicated one.
Consider setting up replication from a central database. This would pump down global information that each user partition/database would need without you having to write a lot of code.
But the biggest piece of advice I've seen - and experienced first hand - don't try too hard to build the next Facebook for scale. Start simple and see what actually happens before worrying about major scalability issues. You might be surprised as the user base grows what scales well and what does not.

a system design question

I was asked the following question while interviewing at a company working on cloud computing, and did not answer it well. Any suggestions on how to analyze this question would be greatly appreciated.
Our company has hundreds of millions of users and we expect zero downtime in production. Explain techniques and programming practices that help improve redundancy and failover capabilities for front-end, middle-tier, and back-end services, including database services.
This question is very much along the lines of the "Impossible Question" from Joel. There is no right answer to this question.
I would start breaking this down into a list of all possible failure points:
Database Server
Database
Middle Tier
Middle Tier Server
Application
Web Server
Then for each of them, I would identify reasons for breakage and how to recover from it without having downtime. For the ones I do not know the answers to, I would say as much.
For example, let's build a list of reasons a database server goes down. Since we are looking for 100% uptime, we ignore nothing, no matter how far-fetched:
Hardware goes bad
Power goes down
Network card goes bad
Operating System unexpectedly crashes
O.S. Upgrades break system
Dumb System Admin or DBA
Dumb Janitor
Some possible solutions (considering a SQL Server on Windows back-end):
Lock on door
Database Mirroring (with regular failover testing; see the quick check after this list)
Multiple NICS
Clustering (with regular failover testing)
Get better people
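For the mirroring bullet, "regular failover testing" can start with something as simple as this hedged sanity check (a sketch; it assumes SQL Server with mirroring already configured):

    -- Which databases are mirrored, what role this server plays, and whether
    -- the mirroring session is currently healthy
    SELECT DB_NAME(database_id)   AS database_name,
           mirroring_role_desc,
           mirroring_state_desc,
           mirroring_partner_name
    FROM sys.database_mirroring
    WHERE mirroring_guid IS NOT NULL;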
You can basically keep answering this question until the interviewer throws in the towel because there really isn't the One-Right-Answer to this question.
That's a pretty broad question. If they expect zero downtime, tell them to forget about it or turn all of their profits over to building redundancy. Now, if they just want "five 9's, or 99.999% uptime" then we can talk. :)
You can usually answer these kinds of questions with the usual canned blather about building a sustainable, automatic build environment that includes extensive unit testing. Using design patterns like MVC or similar can help with testability. Perform regular security audits. This is much bigger than just a development question; this is a question about network and server architecture, maintaining secondary and tertiary data centers, etc. These kinds of questions really give you a chance to make the interviewer feel important.

Replication vs Sync Framework vs Service Broker

I've asked about each of these technologies separately, and really haven't found a suitable answer.
We have a server in our central office running SQL Server 2005 Enterprise that has several large (large in the sense that DSL is the limiting factor) databases that we need local copies of at each of our locations. We currently have a few dozen locations, and are needing to bring even more online. The total number of locations we'll need to sync these databases to will be in the several hundreds in the next 2 years.
We are trying to overcome issues with the WAN connection at each location. These are DSL lines and the wiring at the locations isn't always the best. We currently have issues with some of the locations going down as often as every hour. While we are working to resolve these issues with rewiring and assistance from the local telcos, it mainly highlights the problem at hand: we need a two-way sync that can handle being occasionally-connected.
We tried transactional replication for a while, and while it worked some of the time, it was too high maintenance for us, and it seemed to error out randomly and often with no apparent explanation, forcing us to reinitialize subscriptions (which could take upwards of 4 hours, assuming the location would stay connected long enough to get the entire snapshot in one go). We've looked at rolling our own solution from scratch, but I don't feel this would be the best idea given the scale and reliability we need.
So far we've also looked at Sync Framework, and as suggested by someone else, Service Broker. Sync Framework seems a better fit, but I was told that Service Broker scales better and is more reliable? I can't find any empirical data on the overhead involved with Sync Framework or Service Broker, so it's proving impossible to compare the two in this regard.
What we really need is a two-way sync between the central office server and a remote client that can run autonomously and can report to an admin in the event of a failure that requires our intervention.
There are so many possible solutions to this problem, all involving completely different technologies, that I need a fresh eye on this.
What do you think would be the optimal solution for our situation, and why?
EDIT: Obviously, upgrading to SQL Server 2008 would solve this problem easily. However, we would like to try less expensive options first.
I don't have any hard data to offer on this, but we used the Sync Framework on a project a while ago. My experience with it is really bad. It's slow (even when synchronizing relatively small tables across a LAN), scales terribly, and requires a lot of work to manually handle error conditions (it'll happily produce larger packets than WCF can handle by default, and it is only able to split updates into batches when syncing one way, not the other). And it only works with a few select databases (the client must use MS SQL Compact Edition, as I recall), unless you're willing to write your own SyncAdapter.
Overall, a lot of work just to get a fragile and inefficient solution to your problem. I wouldn't recommend it.
You can use Sync Framework with SQL Server Express 2008 R1/R2 on one end and a multi-tenant SQL Server Enterprise database on the central end. Below is a sample application for n-tier sync over a secure WCF channel. You could write a Windows service to sync data from the back end:
http://www.rajneeshnoonia.com/blog/2012/03/n-tier-sync-framework/
It should be capable of handling a large number of clients (thousands).
I think we'll look into the SQL Server 2008 upgrade route. It seems the native change tracking support will be the easiest way to accomplish this.
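If that route pans out, the change tracking side is fairly small. A rough sketch (database, table, and key names are placeholders) would be:

    -- Turn on change tracking for the database and for each table to sync
    ALTER DATABASE MyDatabase
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.Customers
    ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

    -- On each sync, ask only for what changed since the version we last saw
    DECLARE @last_sync BIGINT = 0;  -- persisted per location in a real setup
    SELECT ct.SYS_CHANGE_OPERATION, ct.CustomerId
    FROM CHANGETABLE(CHANGES dbo.Customers, @last_sync) AS ct;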

How to monitor tables in SQL Server for changes

This question was asked quite some time ago, and while it covers possible solutions for SQL 2005 and 2008, it lacks a good solution for SQL 2000, which is still far too common.
I need a way to monitor certain fields of a database table for changes, and notify my application when these changes occur so that I can blast them out on the local network as broadcast messages where anyone with a client can listen for them and display them as alerts (think something similar to stock market data reaching specific thresholds).
I do NOT want to poll the database, for several reasons: 1) I don't wish to add additional load to the servers, and 2) I would rather get notifications in near real time than wait for the polling interval to elapse.
Now, I could put logic in the applications that update the database, but the data can be updated from several sources, including the web, and I don't want to deal with web servers sending notifications across DMZ boundaries, etc. And I don't want to have to maintain this in 20 different applications (the more overpowering issue).
I've seen this done on SQL 2000 using extended stored procs and triggers, but the XPs seem to be difficult to make cross-platform, and they break when installed on SQL 2005 and 2008. Maybe that's just bad code in the examples I've seen, I'm not sure, but I am looking for something that works in SQL 2000 and later versions.
Any ideas?
EDIT:
I've thought about dropping support for 2000, but that really doesn't solve my problem. I would like a solution that is going to continue to work for years to come. One problem with many Microsoft technologies is that support for them gets dropped. For instance, Notification Services does what I need it to do, but it was deprecated in 2008 and won't be available in the next version. So I'm looking for a solution that has a good chance of sticking around.
Very simple solution
You could have a trigger that calls a webpage, notifying of an update.
This may be quite bad, because if the server can't get to the web, for some reason, it may make the insert operation quite slow. Also, depending on the frequency of inserts, it could be equally bad.
Alternative plan
In a trigger, write to a queue. (I happen to be in love with MSMQ.) Then have something waiting on that queue, and you will get the messages in 'real time'. Again, it's sensitive to the frequency of updates, as above.
Better plan
Have a trigger that posts the data to a 'tblUpdatedThings' table, which you then poll. But I know you don't want to poll. Regardless, I consider this better, for the reasons described above.
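A minimal sketch of that plan, written so it also runs on SQL 2000 (the monitored table dbo.Prices and its PriceId key are purely illustrative):

    -- Lightweight change log that a separate process polls
    CREATE TABLE dbo.tblUpdatedThings (
        Id        INT IDENTITY(1,1) PRIMARY KEY,
        TableName SYSNAME  NOT NULL,
        KeyValue  INT      NOT NULL,
        UpdatedAt DATETIME NOT NULL DEFAULT GETDATE()
    )
    GO

    -- The trigger only records *that* something changed; it does no heavy work
    CREATE TRIGGER trg_Prices_Changed ON dbo.Prices
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON
        INSERT INTO dbo.tblUpdatedThings (TableName, KeyValue)
        SELECT 'Prices', i.PriceId
        FROM inserted AS i
    END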
You want your solution to be in the database, but you want it to be database-independent. You can't have it both ways. Pick one. If you want to be independent of the database, don't allow the sources to write to the database directly, but to call a central service that you control, and where you can trap any events of interest to you.
If you want to use database functionality without polling, you have to deploy code that the database invokes, and you will have a dependency on future versions supporting your code.

Where to put your code - Database vs. Application? [closed]

I have been developing web/desktop applications for about 6 years now. During the course of my career, I have come across applications that were heavily written in the database using stored procedures, whereas a lot of applications had only a few basic stored procedures (to read, insert, edit, and delete entity records) for each entity.
I have seen people argue that if you have paid for an enterprise database, you should use its features extensively, whereas a lot of "object oriented architects" have told me it's an absolute crime to put anything more than necessary in the database and that you should be able to drive the application using the methods on your classes.
Where do you think the balance lies?
Thanks,
Krunal
I think it's a business logic vs. data logic thing. If there is logic that ensures the consistency of your data, put it in a stored procedure. Same for convenience functions for data retrieval/update.
Everything else should go into the code.
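As a hedged illustration of "consistency logic in a stored procedure" (the dbo.Accounts table and the business rule are made up for the example):

    CREATE PROCEDURE dbo.TransferFunds
        @FromAccount INT,
        @ToAccount   INT,
        @Amount      DECIMAL(18, 2)
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRANSACTION;

        UPDATE dbo.Accounts SET Balance = Balance - @Amount WHERE AccountId = @FromAccount;
        UPDATE dbo.Accounts SET Balance = Balance + @Amount WHERE AccountId = @ToAccount;

        -- The rule "no account may go negative" lives next to the data,
        -- so every application that calls the procedure gets it for free.
        IF EXISTS (SELECT 1 FROM dbo.Accounts WHERE AccountId = @FromAccount AND Balance < 0)
        BEGIN
            ROLLBACK TRANSACTION;
            RAISERROR('Insufficient funds', 16, 1);
            RETURN;
        END

        COMMIT TRANSACTION;
    END;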
A friend of mine is developing a host of stored procedures for data analysis algorithms in bioinformatics. I think his approach is quite interesting, but not the right way in the long run. My main objections are maintainability and lacking adaptability.
I'm in the object oriented architects camp. It's not necessarily a crime to put code in the database, as long as you understand the caveats that go along with that. Here are some:
It's not debuggable
It's not subject to source control
Permissions on your two sets of code will be different
It will make it more difficult to track where an error in the data came from if you're accessing info in the database from both places
Anything that relates to Referential Integrity or Consistency should be in the database as a bare minimum. If it's in your application and someone wants to write an application against the database they are going to have to duplicate your code in their code to ensure that the data remains consistent.
PL/SQL for Oracle is a pretty good language for accessing the database, and it can also give performance improvements. Your application can also be much 'neater', as it can treat the database stored procedures as a 'black box'.
The sprocs themselves can also be tuned and modified without you having to go near your compiled application, this is also useful if the supplier of your application has gone out of business or is unavailable.
I'm not advocating that 'everything' should be in the database, far from it. Treat each case separately and logically, and you will see which makes more sense: put it in the app or put it in the database.
I'm coming from almost the same background and have heard the same arguments. I do understand that there are very valid reasons to put logic into the database. However, which approach you should choose depends on the type of application and the way it handles data.
In my experience, a typical data entry app like some customer (or xyz) management will massively benefit from using an ORM layer as there are not so many different views at the data and you can reduce the boilerplate CRUD code to a minimum.
On the other hand, assume you have an application with a lot of concurrency and calculations that span a lot of tables and that has a fine-grained column-level security concept with locking and so on, you're probably better off doing stuff like that directly in the database.
As mentioned before, it also depends on the variety of views you anticipate for your data. If there are many different combinations of columns and tables that need to be presented to the user, you may also be better off just handing back different result sets rather than map your objects one-by-one to another representation.
After all, the database is good at dealing with sets, whereas OO code is good at dealing with single entities.
Reading these answers, I'm quite confused by the lack of understanding of database programming. I am an Oracle PL/SQL developer, and we source-control every bit of code that goes into the database. Many of the IDEs provide add-ins for most of the major source control products, from ClearCase to SourceSafe. The Oracle tools we use allow us to debug the code, so debugging isn't an issue. The issue is more one of logic and accessibility.
As a manager of support for about 5000 users, the fewer places I have to look for the logic, the better. If I want to make sure the logic is applied for ALL applications that use the data, even business logic, I put it in the DB. If the logic is different depending on the application, the application can be responsible for it.
#DannySmurf:
It's not debuggable
Depending on your server, yes, they are debuggable. This provides an example for SQL Server 2000. I'm guessing the newer ones also have this. However, the free MySQL server does not have this (as far as I know).
It's not subject to source control
Yes, it is. Kind of. Database backups should include stored procedures. Those backup files might or might not be in your version control repository. But either way, you have backups of your stored procedures.
My personal preference is to try and keep as much logic and configuration out of the database as possible. I am heavily dependent on Spring and Hibernate these days so that makes it a lot easier. I tend to use Hibernate named queries instead of stored procedures and the static configuration information in Spring application context XML files. Anything that needs to go into the database has to be loaded using a script and I keep those scripts in version control.
#Thomas Owens: (re source control) Yes, but that's not source control in the same sense that I can check in a .cs file (or .cpp file or whatever) and go and pick out any revision I want. To do that with database code requires a potentially-significant amount of effort to either retrieve the procedure from the database and transfer it to somewhere in the source tree, or to do a database backup every time a minor change is made. In either case (and regardless of the amount of effort), it's not intuitive; and for many shops, it's not a good enough solution either. There is also the potential here for developers who may not be as studious at that as others to forget to retrieve and check in a revision. It's technically possible to put ANYTHING in source control; the disconnect here is what I would take issue with.
(re debuggable) Fair enough, though that doesn't provide much integration with the rest of the application (where the majority of the code could live). That may or may not be important.
Well, if you care about the consistency of your data, there are reasons to implement code within the database. As others have said, placing code (and/or RI/constraints) inside the database acts to enforce business logic, close to the data itself. And, it provides a common, encapsulated interface, so that your new developer doesn't accidentally create orphan records or inconsistent data.
Well, this one is difficult. As a programmer, you'll want to avoid T-SQL and such "database languages" as much as possible, because they are horrendous, difficult to debug, and not extensible, and there's nothing you can do with them that you can't do with code in your application.
The only reasons I see for writing stored procedures are:
Your database isn't great (think how SQL Server doesn't implement LIMIT and you have to work around that using a procedure; see the sketch after this list).
You want to be able to change a behaviour by changing code in just one place without re-deploying your client applications.
The client machines have big calculation-power constraints (think small embedded devices).
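For the LIMIT point above, the usual pre-OFFSET/FETCH workaround is a ROW_NUMBER wrapper like this (a sketch on SQL Server 2005+; dbo.Articles is a hypothetical table):

    WITH Numbered AS (
        SELECT a.*,
               ROW_NUMBER() OVER (ORDER BY a.CreatedAt DESC) AS rn
        FROM dbo.Articles AS a
    )
    SELECT *
    FROM Numbered
    WHERE rn BETWEEN 21 AND 30;  -- the equivalent of LIMIT 10 OFFSET 20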
For most applications though, you should try to keep your code in the application where you can debug it, keep it under version control and fix it using all the tools provided to you by your language.
