Distributed system: SQL Server Service Broker implementation

I have a distributed application consisting of 8 servers, all running .NET Windows services. Each service polls the database for available work packages.
The polling mechanism is important for other reasons (too boring for right now).
I'm thinking that this polling mechanism would be best implemented with a queue, as the .NET services will all be polling the database regularly and I don't want deadlocks under load.
I'm thinking I would want each .NET service to put a message into an input queue. The database server would pop each message off the input queue one at a time, process it, and put a reply message on another queue.
The issue I am having is that most examples of SQL Server Service Broker (SSB) are between database services and are not initiated from a .NET client. I'm wondering if Service Broker is just the wrong tool for this job. I see that the broker T-SQL DML is available from .NET, but the way I think this should work doesn't seem to fit with SSB.
I think that I would need a single SSB service with 2 queues (in and out) and a single activation stored procedure.
This doesn't seem to be the way SSB works. Am I missing something?

You got the picture pretty much right, but there are some missing puzzle pieces in that picture:
SSB is primarily a communication technology, designed to deliver messages across the network with exactly-once-in-order semantics (EOIO), in a fully transactional fashion. It handles network connectivity (authentication, traffic confidentiality and integrity) and acknowledgement and retry logic for transmission.
Internal Activation is a unique technology in that it eliminates the need for a resident service to poll the queue. Polling can never achieve the dynamic balance needed for low latency and low resource consumption under light load: it forces either high latency (infrequent polling to save resources) or high resource utilization (frequent polling to provide low latency). Internal activation also has a self-tuning capability to ramp up more processors in response to spikes in load (via max_queue_readers), while still being able to scale processing down under low load by deactivating processors. One often overlooked advantage of the Internal Activation mechanism is that it is fully contained within the database, i.e. it fails over with a cluster or database mirroring failover, and it travels with the database in backups and copy-attach operations. There is also an External Activation mechanism, but in general I much prefer the internal one for anything that fits an internal context (e.g. not an HTTP request, which must be handled outside the engine process...).
Conversation Group Locking is again unique and is a means of providing exclusive access to correlated message processing. An application can take advantage of this by using the conversation_group_id as a business logic key, and this pretty much completely eliminates deadlocks, even under heavy multithreading.
There is also one thing you got wrong about Service Broker: the need to put responses into a separate queue. Unlike most queueing products you may be familiar with, SSB's primitive is not the message but the 'conversation'. A conversation is a full-duplex, bidirectional communication channel; you can think of it much like a TCP socket. An SSB service does not need to 'put responses in a queue'; it can simply send a response on the conversation handle of the received message (much like a TCP socket server responds by issuing a send on the same socket it got the request from, rather than by opening a new socket to send the response). Furthermore, SSB takes care of the message correlation, so the sender knows exactly which request a response belongs to, since the response comes back on the same conversation handle the request was sent on (much like a TCP client receives the server's response on the same socket it sent the request on). This conversation handle also matters for the correlation locking of related conversation groups; see the Conversation Group Locking point above.
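To make that concrete for the .NET-client scenario in the question, here is a minimal sketch of a client beginning a dialog, sending a request and waiting for the reply on the same conversation, using plain ADO.NET. All of the names (services, contract, message type, queue and connection string) are invented for illustration and would have to match whatever broker objects you actually create.

using System;
using System.Data;
using System.Data.SqlClient;

class BrokerClient
{
    static void Main()
    {
        using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=Work;Integrated Security=true"))
        {
            conn.Open();

            // Begin a conversation and send the request on it.
            var send = new SqlCommand(@"
                DECLARE @h UNIQUEIDENTIFIER;
                DECLARE @body VARBINARY(MAX);
                SET @body = @payload;
                BEGIN DIALOG CONVERSATION @h
                    FROM SERVICE [//WorkClientService]
                    TO SERVICE '//WorkService'
                    ON CONTRACT [//WorkContract]
                    WITH ENCRYPTION = OFF;
                SEND ON CONVERSATION @h MESSAGE TYPE [//WorkRequest] (@body);
                SELECT @h;", conn);
            send.Parameters.Add("@payload", SqlDbType.VarBinary, -1).Value = new byte[] { 1, 2, 3 };
            Guid handle = (Guid)send.ExecuteScalar();

            // The reply comes back on the *same* conversation, so wait for it on our own
            // queue, filtered by the conversation handle we just opened.
            var recv = new SqlCommand(@"
                WAITFOR (
                    RECEIVE TOP (1) message_type_name, message_body
                    FROM WorkClientQueue
                    WHERE conversation_handle = @h
                ), TIMEOUT 5000;", conn);
            recv.Parameters.Add("@h", SqlDbType.UniqueIdentifier).Value = handle;
            using (var reader = recv.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("Got reply of type {0}", reader.GetString(0));
            }

            // A real application would also END CONVERSATION when the exchange is done,
            // or keep reusing the same conversation for related work items.
        }
    }
}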
You can embed .NET logic into the Service Broker processing via SQLCLR. Almost all SSB applications have a non-SSB service at at least one end, directly or indirectly (e.g. via a trigger); a distributed application entirely contained in the database is of little use.

Related

About Ringpop, application-layer sharding, used at Uber

https://ringpop.readthedocs.org/en/latest/
To my understanding, the sharding can be implemented in some library routines, and the application programs are just linked with the library. If the library is an RPC client, the sharding can be queried from the server side in real time, so even if there is a new partition, it is transparent to the applications.
Ringpop is an application-layer sharding strategy based on the SWIM membership protocol. I wonder what the major advantage of doing this at the application layer is?
What is the other side, say, sharding at the system layer?
Thanks!
Maybe a bit late for this reply, but maybe someone still needs this information.
Ringpop introduces the idea of sharding the application rather than the data. It works more or less like application-level middleware, but with the advantage that it offers an easy way to build scalable and fault-tolerant applications.
The things that Ringpop shards are the requests coming from clients to a specific service. This is one of its major advantages (there are more, keep reading).
In a traditional SOA architecture, all requests for a specific service go to a single system that dispatches them among the workers for load balancing. These workers do not know each other; they are independent entities and cannot communicate among themselves. They do their job and send back a reply.
Ringpop is the opposite: the workers know each other, can discover new ones, regularly talk to each other to check their health, and spread this information to the other workers.
How does Ringpop shard requests?
It uses the concept of keyspaces. A keyspace is just a range of numbers; you are free to choose whatever range you like, but the obvious choice is to hash the IDs of the objects in the application and use the hash function's codomain as the range.
A keyspace can be imagined as a hash "ring", but in practice a point on it is just a 4- or 8-byte integer.
A worker, i.e. a node that can serve requests for a specific service, is 'virtually' placed on this ring: it owns a contiguous portion of the ring, that is, it is assigned a sub-range. A worker is in charge of handling all the requests belonging to its sub-range (a toy sketch of the ownership lookup follows the list below). Handling a request means one of two things:
- process the request and provide a response, or
- forward the request to another service that actually knows how to serve it
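A hedged illustration of that ownership lookup (class and method names are made up; Ringpop itself is a Node.js library, this is just the idea sketched in C#):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class HashRing
{
    // Each worker is placed at the point Hash(address); keys belong to the
    // first worker at or after their own hash value (clockwise on the ring).
    private readonly SortedDictionary<uint, string> ring = new SortedDictionary<uint, string>();

    public void AddWorker(string address)    { ring[Hash(address)] = address; }
    public void RemoveWorker(string address) { ring.Remove(Hash(address)); }

    // The owner of a key is the first worker at or after the key's hash, wrapping around.
    public string OwnerOf(string key)
    {
        uint h = Hash(key);
        foreach (var point in ring.Keys)
            if (point >= h) return ring[point];
        return ring.Values.First();
    }

    private static uint Hash(string s)
    {
        using (var md5 = MD5.Create())
            return BitConverter.ToUInt32(md5.ComputeHash(Encoding.UTF8.GetBytes(s)), 0);
    }
}

A worker that receives a request computes OwnerOf(requestKey); if the owner is itself it processes the request, otherwise it forwards the request to the owner.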
Every application is built with this behaviour embedded: there is the logic to handle a request, or to forward it to another worker that can handle it. The forwarding mechanism is nothing more than a remote procedure call, which is actually made using TChannel, Uber's high-performance RPC framework.
If you think about this, you can see that Ringpop offers a very nice property that traditional SOA architectures do not have: clients don't need to know or care which instance can serve their request. They can just send a request anywhere in the Ringpop ring, and the receiving worker will serve it or forward it to the right owner.
Ringpop has another interesting feature: new workers can dynamically enter the ring, and old workers can leave the ring (e.g. because of a crash or just a shutdown), without any service interruption.
Ringpop implements a membership protocol based on SWIM.
It enables workers to discover each other and to exclude a broken worker from the ring using a TCP-based gossip protocol. When a new worker is discovered by another worker, a new connection is established between them. Every worker tracks the status of the other workers by sending a ping request at regular intervals, and spreads that status information to the other workers if a ping does not get a reply (membership updates are piggybacked on the pings, gossip style).
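A very rough sketch of that ping loop (names and transport are invented; real Ringpop piggybacks membership updates on the pings and uses TChannel rather than HTTP):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class MembershipMonitor
{
    // address -> "alive" or "suspect"; a suspect member is gossiped to the other workers.
    private readonly Dictionary<string, string> members = new Dictionary<string, string>();
    private static readonly HttpClient http = new HttpClient { Timeout = TimeSpan.FromSeconds(1) };

    public void Add(string address) { members[address] = "alive"; }

    public async Task RunAsync()
    {
        while (true)
        {
            foreach (var peer in new List<string>(members.Keys))
            {
                try
                {
                    // Stand-in for the real RPC ping.
                    var response = await http.GetAsync("http://" + peer + "/ping");
                    members[peer] = response.IsSuccessStatusCode ? "alive" : "suspect";
                }
                catch (Exception)
                {
                    // No reply (refused or timed out): mark suspect and gossip the update.
                    members[peer] = "suspect";
                }
            }
            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }
}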
These three elements (consistent hashing, request forwarding, and a membership protocol) make Ringpop an interesting way to get scalability and fault tolerance at the application layer while keeping complexity and operational overhead to a minimum.

I want to log all MQTT messages passing through the broker. How should I design the database schema to avoid duplicate entries and allow fast searching?

I am implementing a callback in Java to store messages in a database. I have a client subscribing to '#'. The problem is that when this '#' client disconnects and reconnects, it adds duplicate entries of the retained messages to the database. If I search for previous entries before inserting, bigger tables will be expensive in computing power. So should I allot a separate table for each sensor, or one per broker? I would really appreciate suggestions for better designs.
Subscribing to the wildcard topic with a single client is definitely an anti-pattern. The reasons for that are:
Wildcard subscribers get all messages of the MQTT broker. Most client libraries can't handle that load, especially not when transforming / persisting messages.
If your wildcard subscriber dies, you will lose messages (unless the broker queues endlessly for you, which also doesn't work).
You essentially have a single point of failure in your system. Use MQTT brokers which are hardened for production use; these are much more robust single points of failure than your hand-written clients. (You can overcome the SPOF through clustering and load balancing, though.)
So to solve the problem, I suggest the following:
Use a broker which can handle shared subscriptions (like HiveMQ or MessageSight), so you can balance all messages between many clients
Use a custom plugin for doing the persistence at the broker instead of the client.
You can also read more about that topic here: http://www.hivemq.com/blog/mqtt-sql-database
Also consider using QoS 2 for all messages to make sure each message is delivered exactly once (MQTT only defines QoS levels 0, 1 and 2). You may also consider time-stamping each message so you can detect and avoid inserting duplicates if the QoS guarantee is not met.
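To make the de-duplication idea concrete, here is a hedged sketch of a guarded insert keyed on the topic plus a hash of the payload. The table and column names are invented, and it assumes a UNIQUE constraint on (Topic, PayloadHash); the same idea works from a Java callback, it is just sketched here in C# with ADO.NET.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Security.Cryptography;

class MessageStore
{
    private readonly string connectionString;
    public MessageStore(string connectionString) { this.connectionString = connectionString; }

    public void Save(string topic, byte[] payload, DateTime receivedUtc)
    {
        byte[] hash;
        using (var sha = SHA256.Create()) hash = sha.ComputeHash(payload);

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Guarded insert: a re-delivered retained message with the same topic
            // and payload hash simply does not create a second row.
            var cmd = new SqlCommand(@"
                INSERT INTO dbo.MqttMessages (Topic, PayloadHash, Payload, ReceivedUtc)
                SELECT @topic, @hash, @payload, @received
                WHERE NOT EXISTS (SELECT 1 FROM dbo.MqttMessages
                                  WHERE Topic = @topic AND PayloadHash = @hash);", conn);
            cmd.Parameters.Add("@topic", SqlDbType.NVarChar, 400).Value = topic;
            cmd.Parameters.Add("@hash", SqlDbType.Binary, 32).Value = hash;
            cmd.Parameters.Add("@payload", SqlDbType.VarBinary, -1).Value = payload;
            cmd.Parameters.Add("@received", SqlDbType.DateTime2).Value = receivedUtc;
            cmd.ExecuteNonQuery();
        }
    }
}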

Is RabbitMQ, ZeroMQ, Service Broker or something similar an appropriate solution for creating a high availability database webservice?

I have a CRUD webservice, and have been tasked with trying to figure out a way to ensure that we don't lose data when the database goes down. Everyone is aware that if the database goes down we won't be able to get "reads" but for a specific subset of the operations we want to make sure that we don't lose data.
I've been given the impression that this is something that is covered by services like 0MQ, RabbitMQ, or one of the Microsoft MQ services. Although after a few days of reading and research, I'm not even certain that the messages we're talking about in MQ services include database operations. I am however 100% certain that I can queue up as many hello worlds as I could ever hope for.
If I can use a message queue for adding a layer of protection to the database, I'd lean towards Rabbit (because it appears to persist through crashes), but since the target is a Microsoft SQL Server database, perhaps one of their solutions (such as SQL Server Service Broker or MSMQ) is more appropriate.
The real fundamental question that I'm not yet sure of though is whether I'm even playing with the right deck of cards (so to speak).
With the desire for a high-availability webservice that continues to function if the database goes down, does it make sense to put a RabbitMQ instance "between" the webservice and the database? Or maybe the right link in the chain is to have RabbitMQ send messages to the webserver?
Or is there some other solution for achieving this? There are a number of loose ideas at the moment around finding a way to roll up weblogs in the event of a database outage or something... but we're still in early enough stages that (at least I) have no idea what I'm going to do.
Is a message queue the right solution?
Introducing message queuing between a service and its database operations is certainly one way of improving service availability. Writing to a local, temporary queue in a store-and-forward scenario will always be more available than writing to a remote database server, simply by virtue of being a local operation.
Additionally, by using queuing you gain greater control over the volume and nature of the traffic your database has to handle at peak. Database writes can be queued, routed, and even committed in a different order.
However, in order to do this you need to be aware that the database write is then performed off-line. Even when this happens almost instantaneously, you lose a benefit that the synchronous nature of your current service gives you: your service consumers can always know whether the database write succeeded or not.
I have written about this subject before here. The user posting the question had similar concerns to you. Whether you do this or not is a decision you have to make based on whether this is something your consumers care about or not.
As for the technology stacks you are thinking of, this off-line model can be implemented with pretty much any of them, with the possible exception of Service Broker, which doesn't integrate well with code (see my answer here: https://stackoverflow.com/a/45690344/569662).
If you're using Windows and unlikely to need to migrate, I would go for MSMQ (which supports durable messaging via transactional queues) as it's lightweight and part of Windows.
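A minimal sketch of that store-and-forward shape with a local transactional MSMQ queue (the queue path and message format are made up for illustration): the webservice enqueues locally and returns, and a background worker drains the queue into SQL Server whenever the database is reachable.

using System.Messaging;

class WriteBuffer
{
    private const string QueuePath = @".\private$\pending-db-writes";

    public void Enqueue(string operationXml)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath, true);   // transactional queue

        using (var queue = new MessageQueue(QueuePath))
        {
            // Recoverable + transactional: the message survives a crash or reboot.
            var message = new Message(operationXml) { Recoverable = true };
            queue.Send(message, MessageQueueTransactionType.Single);
        }
    }

    // Called in a loop by a background worker that applies the operation to the database.
    // In production you would wrap the receive and the database write in one transaction
    // (e.g. TransactionScope with MessageQueueTransactionType.Automatic) so a failed
    // write puts the message back on the queue.
    public string Dequeue()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            var message = queue.Receive(MessageQueueTransactionType.Single);
            return (string)message.Body;
        }
    }
}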

Good Strategy for Message Queuing?

I'm currently designing an application which I will ultimately want to move to Windows Azure. In the short term, however, it will be running on a server which I will host myself.
The application involves a number of separate web applications - some of these are essentially WCF services which receive data, and some are sites for users to manage data. In addition, there will need to be a worker service running in the background which will process data in various ways.
I'm very keen to use a decoupled architecture for this. Ideally I'm wanting the components (i.e. web apps and worker service) to know as little as possible about each other. It seems like using a message queue will be the best solution here - the web apps can enqueue messages with work units into the queue and the worker service can pick them out and process them as needed.
However, I want to work out a good set of technologies for doing this, bearing in mind that I'll ultimately be moving to Azure and want to minimise the amount of re-work I'll need to do when I migrate to the cloud. Azure has a Queue component built in which looks ideal for my needs. What I'd like to do is create something myself which will mimic this as closely as possible.
It looks like there are several options (I'm using .NET on Windows, with a SQL Server 2005 back end) - the ones I've found so far are:
MSMQ
SQL Server Service Broker
Rolling my own using a database table and some stored procs
I was wondering if anyone has any suggestions for this - or if anyone has done anything similar and has advice on things to do/to avoid. I realise that every situation is different, but in this case I think my queuing requirements are pretty generic so I'd love to hear anyone else's thoughts about the best way to do this.
Thanks in advance,
John
If you have Azure in mind, perhaps you should start straight on Azure, as the APIs and semantics are significantly different between Azure queues and either MSMQ or SSB.
A quick 3048-meter (10,000-foot) comparison of MSMQ vs. SSB (I'll leave a custom table-as-queue out of the comparison as it really depends on how you implement it...):
Deployment: MSMQ is a Windows component, SSB is a SQL Server component. SSB requires a SQL Server instance to store any message, so disconnected clients need access to an instance (it can be Express). MSMQ requires deployment of MSMQ on the client (part of the OS, but an optional install).
Programmability: MSMQ offers a fully fledged, supported, WCF channel. SSB offers only an experimental WCF channel at http://ssbwcf.codeplex.com
Performance: SSB will be significantly faster than MSMQ in transacted mode. MSMQ will be faster if allowed to operate in untransacted mode (best-effort, unordered delivery).
Queryability: SSB queues can be SELECT-ed upon (view any message, with the full power of SQL JOIN/WHERE/ORDER/GROUP); MSMQ queues can only be peeked (next message only).
Recoverability: SSB queues are integrated into the database, so they are backed up and restored with the database, keeping a consistent state with the application state. MSMQ queues are backed up by the NT file backup subsystem, so to keep the backup in sync (coherent) the queue and the database have to be suspended.
Transactions (since every enqueue/dequeue is always accompanied by a database update): SSB is fully integrated in SQL Server, so dequeueing and enqueueing are local transaction operations. MSMQ is a separate TM (Transaction Manager), so an enqueue/dequeue has to be a distributed transaction operation to enroll both SQL Server and MSMQ in the transaction.
Management and Monitoring: both are equally bad. No tools whatsoever.
Correlated message processing: SSB can block processing of correlated messages by concurrent threads via the built-in Conversation Group Locking.
Event driven: SSB has Activation to launch stored procedures; MSMQ uses the Windows Activation Service. Similar. SSB, though, has self-load-balancing capabilities due to the way WAITFOR(RECEIVE) and MAX_QUEUE_READERS interact.
Availability: SSB piggybacks on the SQL Server high-availability story; it can work in either a clustered or a database mirroring environment. MSMQ rides the Windows clustering story only. Database mirroring is much cheaper than clustering as an HA solution.
In addition, SSB and MSMQ differ significantly in the primitive they offer: SSB's primitive is a conversation, while MSMQ's primitive is a message. Think TCP vs. UDP semantics.
Pick a queue back end that works for you, or that is better suited to your environment. @Remus has given a great comparison between MSMQ and SSB. MSMQ is going to be the easier one to implement, but it has some notable limitations, while SSB is going to feel very heavy as it's at the other end of the spectrum.
Have It Your Way
To minimize the rework in your applications, abstract the queue access behind an interface, and then provide an implementation for the queue transport you ultimately decide to go with. When it's time to move to Azure, or to another queue transport, you just provide a new implementation of your interface.
You get to control the semantics of how you want to interact with the queue, giving a consistent, usable API to your applications.
A rough idea might be:
using System.Xml;

public interface IQueuedTransport
{
    void SendMessage(XmlDocument message);
    XmlDocument ReceiveMessage();
}

// One implementation per transport; only the chosen one ships with the application.
public class MSMQTransport : IQueuedTransport { /* wrap the queue API here */ }
public class AzureQueueTransport : IQueuedTransport { /* wrap the queue API here */ }
You may not be building the be-all queuing transport, just what meets your needs. If you work with XML, pass XML. If you work in byte arrays, pass byte arrays. :)
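Hypothetical wiring, assuming the transport classes above are fleshed out: the application only ever sees the interface, so swapping back ends later is a one-line change (or a container registration).

IQueuedTransport transport = new MSMQTransport();   // later: new AzureQueueTransport()
var doc = new XmlDocument();
doc.LoadXml("<workItem id=\"42\" />");
transport.SendMessage(doc);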
Good luck!
Z
Use Win32 Mailslots. They will be reliable on a single server, are easy to implement, and do not require any extra software.

Implementing Comet on the database-side

This is more out of curiosity and "for future reference" than anything, but how is Comet implemented on the database-side? I know most implementations use long-lived HTTP requests to "wait" until data is available, but how is this done on the server-side? How does the web server know when new data is available? Does it constantly poll the database?
What DB are you using? If it supports triggers, which many RDBMSs do in some shape or form, then you could have the trigger fire an event that actually tells the HTTP request to send out the appropriate response.
Triggers remove the need to poll... polling is generally not the best idea.
PostgreSQL seems to have pretty good support (even PL/Python).
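As a sketch of how that trigger-driven approach can look on the web tier: assume a PostgreSQL trigger that calls pg_notify('new_data', ...) on insert, and (purely as an illustration) a .NET comet process using the Npgsql driver; the channel name and connection string are invented.

using System;
using Npgsql;

class CometNotificationListener
{
    static void Main()
    {
        using (var conn = new NpgsqlConnection("Host=localhost;Database=app;Username=comet"))
        {
            conn.Open();

            conn.Notification += (sender, e) =>
            {
                // Here the comet server would complete the pending long-lived HTTP responses.
                Console.WriteLine("New data available: " + e.Payload);
            };

            using (var cmd = new NpgsqlCommand("LISTEN new_data", conn))
                cmd.ExecuteNonQuery();

            while (true)
                conn.Wait();   // blocks until the database sends a notification
        }
    }
}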
This is very much application-dependent. The most likely implementation is some sort of messaging system.
Most likely, your server-side code will consist of quite a few parts:
a few app servers that handle incoming requests,
a (separate) comet server that deals with all the open connections to clients,
the database, and
some sort of messaging infrastructure
The last one, the messaging infrastructure, is really the key. It provides a way for the app servers to talk to the comet server: when a request comes in, the app server puts a message into the message queue telling the comet server to notify the correct client(s).
How messaging is implemented is, again, very much application dependent. A very simple implementation would just use a database table called messages and poll that.
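A hedged sketch of that naive table-polling approach (the table, columns and connection string are invented; it is sketched in C# here, but any stack can do the same): the comet server remembers the highest message id it has delivered and periodically asks for anything newer.

using System;
using System.Data.SqlClient;
using System.Threading;

class MessageTablePoller
{
    static void Main()
    {
        long lastSeenId = 0;
        using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=app;Integrated Security=true"))
        {
            conn.Open();
            while (true)
            {
                var cmd = new SqlCommand(
                    "SELECT id, client_id, body FROM messages WHERE id > @last ORDER BY id", conn);
                cmd.Parameters.AddWithValue("@last", lastSeenId);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        lastSeenId = reader.GetInt64(0);
                        // Push reader["body"] to the client identified by reader["client_id"].
                    }
                }
                Thread.Sleep(500);   // the poll interval is the latency/load trade-off
            }
        }
    }
}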
But depending on the stack you plan on using, there should be more sophisticated tools available.
In Rails I'm using Juggernaut, which simply listens on a network port. Whenever there is data to send, the Rails application server opens a connection to this Juggernaut push server and tells it what to send to the clients.

Resources