Camel SMPP reconnection strategy - apache-camel

I have an application with 2 routes that are in charge of sending SMS messages.
We noticed that sometimes the connection is lost (or seems to be lost). In those cases Camel tries to reconnect to the SMSC, and sometimes we end up with 4 connections open to the SMSC instead of 2.
The problem is that we have a limit on the number of connections available at the SMSC level.
What would you advise to avoid that kind of behaviour?
Is there a way to make sure that a new connection is not opened until the initial connection is closed?
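For context, a minimal sketch of the kind of setup described, assuming camel-smpp producer endpoints; the source queues, host, port and credentials below are placeholders, not taken from the question:

    import org.apache.camel.builder.RouteBuilder;

    // Sketch of the scenario above: two routes, each producing to the same SMSC
    // over SMPP. Queue names, host, port and credentials are placeholders.
    public class SmsRoutes extends RouteBuilder {
        @Override
        public void configure() {
            from("jms:queue:sms.alerts")
                .to("smpp://smppclient@smsc.example.com:2775?password=secret&systemType=producer");

            from("jms:queue:sms.marketing")
                .to("smpp://smppclient@smsc.example.com:2775?password=secret&systemType=producer");
        }
    }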

Related

I want to log all MQTT messages of the broker. How should I design the database schema to avoid duplicate entries and allow fast searching?

I am implementing a callback in Java to store messages in a database. I have a client subscribing to '#'. The problem is that when this '#' client disconnects and reconnects, it adds duplicate entries for retained messages to the database. And searching previous entries in bigger tables is expensive in computing power. So should I allot a separate table per sensor, or one per broker? I would really appreciate it if you could suggest better designs.
Subscribing to the wildcard topic with a single client is definitely an anti-pattern. The reasons for that are:
Wildcard subscribers get all messages of the MQTT broker. Most client libraries can't handle that load, especially not when transforming / persisting messages.
If your wildcard subscriber dies, you will lose messages (unless the broker queues endlessly for you, which also doesn't work).
You essentially have a single point of failure in your system. Use MQTT brokers which are hardened for production use; these are much more robust single points of failure than your hand-written clients. (You can overcome the SPOF through clustering and load balancing, though.)
So to solve the problem, I suggest the following:
Use a broker which can handle shared subscriptions (like HiveMQ or MessageSight), so you can balance all messages between many clients
Use a custom plugin for doing the persistence at the broker instead of the client.
You can also read more about that topic here: http://www.hivemq.com/blog/mqtt-sql-database
Also consider using QoS 2 for all messages to make sure each message is delivered exactly once. You may also consider time-stamping each message to avoid inserting duplicates if the QoS guarantee is not met.
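To make the persistence side concrete, here is a rough sketch along the lines suggested above, using the Eclipse Paho Java client (an assumption; any client works): subscribe at QoS 2 and make the insert idempotent so that redelivered or retained messages do not create duplicate rows. The broker URL, table layout and MySQL-style INSERT IGNORE are illustrative placeholders:

    import java.security.MessageDigest;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.Base64;
    import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
    import org.eclipse.paho.client.mqttv3.MqttCallback;
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class PersistingSubscriber implements MqttCallback {
        private final Connection db;

        PersistingSubscriber(Connection db) { this.db = db; }

        public static void main(String[] args) throws Exception {
            Connection db = DriverManager.getConnection("jdbc:mysql://localhost/mqttlog", "logger", "secret");
            MqttClient client = new MqttClient("tcp://broker.example.com:1883", "db-logger-1");
            client.setCallback(new PersistingSubscriber(db));
            client.connect();
            client.subscribe("sensors/#", 2);   // QoS 2: exactly-once between broker and client
        }

        @Override
        public void messageArrived(String topic, MqttMessage message) throws Exception {
            // Derive a deduplication key from topic + payload; with a UNIQUE index on msg_key,
            // the byte-identical retained messages replayed after a reconnect are skipped.
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(message.getPayload());
            String msgKey = topic + ":" + Base64.getEncoder().encodeToString(digest);
            try (PreparedStatement ps = db.prepareStatement(
                    "INSERT IGNORE INTO messages (msg_key, topic, payload) VALUES (?, ?, ?)")) {
                ps.setString(1, msgKey);
                ps.setString(2, topic);
                ps.setBytes(3, message.getPayload());
                ps.executeUpdate();
            }
        }

        @Override public void connectionLost(Throwable cause) { /* reconnect handling omitted */ }
        @Override public void deliveryComplete(IMqttDeliveryToken token) { }
    }

Note that a key derived from the payload also suppresses legitimate repeats of identical readings, so a publisher-supplied timestamp or message ID (as suggested above) makes a better unique key when it is available.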

distributed system: sql server broker implementation

I have a distributed application consisting of 8 servers, all running .NET Windows services. Each service polls the database for work packages that are available.
The polling mechanism is important for other reasons (too boring for right now).
I'm thinking that this polling mechanism would be best implemented in a queue as the .NET services will all be polling the database regularly and when under load I don't want deadlocks.
I'm thinking I would want each .NET service to put a message into an input queue. The database server would pop each message off the input queue one at a time, process it, and put a reply message on another queue.
The issue I am having is that most examples of SQL Server Broker (SSB) are between database services and not initiated from a .NET client. I'm wondering if SQL Server Broker is just the wrong tool for this job. I see that the broker T-SQL DML is available from .NET but the way I think this should work doesn't seem to fit with SSB.
I think that I would need a single SSB service with 2 queues (in and out) and a single activation stored procedure.
This doesn't seem to be the way SSB works, am I missing something?
You got the picture pretty much right, but there are some missing puzzle pieces in that picture:
SSB is primarily a communication technology, designed to deliver messages across the network with exactly-once-in-order semantics (EOIO), in a fully transactional fashion. It handles network connectivity (authentication, traffic confidentiality and integrity) and acknowledgement and retry logic for transmission.
Internal Activation is a unique technology in that it eliminates the requirement for a resident service to poll the queue. Polling can never achieve the dynamic balance needed for low latency and low resource consumption under light load: it forces either high latency (infrequent polling to save resources) or high resource utilization (frequent polling to provide low latency). Internal activation also has a self-tuning capability to ramp up more processors in response to spikes in load (via max_queue_readers), while still being capable of tuning down the processing under low load by deactivating processors.
One of the often overlooked advantages of the Internal Activation mechanism is that it is fully contained within the database, i.e. it fails over with a cluster or database mirroring failover, and it travels with the database in backups and copy-attach operations. There is also an External Activation mechanism, but in general I much prefer the internal one for anything that fits an internal context (e.g. not an HTTP request, which must be handled outside the engine process).
Conversation Group Locking is again unique and is a means to provide exclusive access to correlated message processing. Applications can take advantage of this by using the conversation_group_id as a business logic key, and this pretty much completely eliminates deadlocks, even under heavy multithreading.
There is also one thing you got wrong about Service Broker: the need to put a response into a separate queue. Unlike most queueing products you may be familiar with, SSB's primitive is not the message but the 'conversation'. A conversation is a fully duplex, bidirectional communication channel; you can think of it much like a TCP socket. An SSB service does not need to 'put responses in a queue'; instead it can simply send the response on the conversation handle of the received message (much like a TCP socket server responds by issuing a send on the same socket it got the request from, not by opening a new socket). Furthermore, SSB takes care of the inherent message correlation, so the sender knows exactly which request a response belongs to, since the response comes back on the same conversation handle the request was sent on (much like a TCP client receives the response on the same socket it sent the request on). This conversation handle matters again when it comes to the correlation locking of related conversation groups, see Conversation Group Locking above.
You can embed .NET logic into Service Broker processing via SQLCLR. Almost all SSB applications have a non-SSB service at at least one end, directly or indirectly (e.g. via a trigger); a distributed application entirely contained in the database is of little use.
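As a rough illustration of the reply-on-the-same-conversation point above: the question is about a .NET client, but the T-SQL is the same from ADO.NET, so this JDBC-flavoured sketch should carry over; the queue and message type names are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RequestProcessor {
        public static void main(String[] args) throws Exception {
            // RECEIVE the next request and reply on the SAME conversation handle,
            // all inside one transaction. Names below are placeholders.
            String batch =
                "DECLARE @h UNIQUEIDENTIFIER, @mt SYSNAME, @body VARBINARY(MAX); " +
                "WAITFOR (RECEIVE TOP (1) @h = conversation_handle, " +
                "                         @mt = message_type_name, " +
                "                         @body = message_body " +
                "         FROM dbo.RequestQueue), TIMEOUT 5000; " +
                "IF @mt = N'//example/Request' " +
                "    SEND ON CONVERSATION @h MESSAGE TYPE [//example/Reply] (@body);";

            try (Connection cn = DriverManager.getConnection(
                    "jdbc:sqlserver://dbserver;databaseName=Work;integratedSecurity=true")) {
                cn.setAutoCommit(false);
                try (Statement st = cn.createStatement()) {
                    st.execute(batch);   // receive + reply succeed or fail together
                }
                cn.commit();
            }
        }
    }

With internal activation, the same RECEIVE / SEND ON CONVERSATION batch would live in the activation stored procedure rather than in a polling client.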

If HTTP is stateless why do I need to close a database connection?

A common problem I see in lots of web languages is that database connections need to be closed, otherwise the total number of connections gradually increases and everything grinds to a halt in one form or another.
HTTP is stateless, so when the request has finished processing, why can't these languages just drop any connections that the request opened? Are there any legitimate reasons why you might keep one open?
Because opening, authenticating and authorising access to a database is quite expensive. That is why practically everybody uses a database connection pool. The connections stay open while request handlers pick up an available, already-opened connection from the pool. When you close a connection, what really happens is that the connection is freed back to the pool for others to use.
To answer ...
why can't these languages just drop any connections that request opened? Are there any legitimate reasons for why you might keep it open?
Connections might stay open after the request is complete and be used for other purposes, for instance asynchronous updates of data. But I am with you: in 90% of cases, when the request is finished, the connections it opened should be returned to the pool. Depending on the web framework you use (Spring, Django, ...), this kind of behaviour can be configured, or at least implemented with minimal effort.
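To illustrate the pooling described above, a sketch using HikariCP as an example pool (an assumption; any pool behaves similarly); the URL, credentials and query are placeholders:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class UserDao {
        private static final HikariDataSource POOL = createPool();

        private static HikariDataSource createPool() {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:postgresql://localhost/app");
            cfg.setUsername("app");
            cfg.setPassword("secret");
            cfg.setMaximumPoolSize(10);   // 10 physical connections shared by all requests
            return new HikariDataSource(cfg);
        }

        public String findName(long id) throws Exception {
            // getConnection() hands out an already-open physical connection;
            // close() at the end of the try block returns it to the pool,
            // it does not tear down the authenticated session.
            try (Connection cn = POOL.getConnection();
                 PreparedStatement ps = cn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }

The expensive open/authenticate/authorise work happens once per physical connection when the pool is filled, not once per HTTP request.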
Checking for an open connection while closing an HTTP connection adds overhead, so I guess that's why some languages don't close it by default.
And if you don't close it explicitly, it will have to be done by the garbage collector, which can take a while.

Diagnosing poor Sql Server Service Broker forwarding performance

I've been running Service Broker in my development environment for a few months now and have had perfectly adequate performance, around 1000 messages per second (plenty for my needs).
I've also been running on a cut-down replica of my real production environment, which involves a forwarding instance, and for the first time today I pushed some load through it, with terrible results! I'm trying to understand what I've been seeing, but am struggling a bit, so I thought I'd put it out there to see if anyone can help.
Firstly, messages are being delivered from start to end, through the forwarder. However, when I pushed a few thousand messages through, I saw batches of between 20 and 100 being sent, followed by delays of a minute or two. The messages are ultimately processed successfully.
Looking at the queue on the Store (the initial sender), there are thousands of messages sitting there waiting to be forwarded, which are trickling out slowly.
The security setup goes like:
Store database -> Certificate -> Forwarding instance -> Windows Security -> Central database
When I switch on the profilers I see lots of errors:
Some examples on the forwarding instance:
7 - Send IO Error (10054(failed to retrieve text for this error. Reason: 15105))
Forwarded Message Dropped (The forwarded message has been dropped because a transport send error occurred when sending the message. Check previous events for the error.)
And on my 'central' target instance:
A corrupted message has been received. The binary message preamble is malformed.
Broker message undeliverable: This message was dropped because it could not be dispatched on time. State: 2
Can anybody help by pointing me towards some checks I could make, or maybe something obvious that I've missed? I know I've got something wrong but just can't see what.
Edit - 14/1/2011 - more information:
Some more information on this - we took our message forwarding instance out of the equation and saw massive improvements immediately - 2000 messages were delivered in seconds.
The architecture uses transport security, so we're currently trying to switch over to dialog security, as we've read that transport security combined with forwarding can harm performance. We're hoping dialog security will reduce what the forwarding instance needs to decrypt, thereby improving performance.
First thing Monday I want to switch off encryption on the transport layer (between initiator and forwarder) to see if that is where our bottleneck is occurring. Is it possible that this could cause a big overhead in our communications or should one forwarding instance not produce such a big bottleneck?
What SQL Server version?
Several issues with forwarding performance have been fixed. I recommend you upgrade to the latest SQL Server 2008 R2 and deploy the latest cumulative updates. If an upgrade is problematic in your environment, you can upgrade only the forwarder instance.
This might be a stupid suggestion, but have you changed the network topology lately? Maybe swapped out a network cable or overheated a switch? If this is occurring suddenly, it sounds more like a physical change than a logical one. I'd check the windows event log on both machines.
Yes, dialog security is the best approach in conjunction with forwarders; otherwise the overhead will be enormous.

RabbitMQ and DB transactions

Does RabbitMQ support a scenario where a received message acknowledgement is sent on the DB transaction commit?
Currently we send the ack after the DB transaction commits. If the service fails in between, we'll get data duplication: the service will get the same message again.
Is there a pattern for this problem?
Thanks!
Yes it does, but do be aware that RabbitMQ uses its own DB for message storage (at the moment). To get RabbitMQ to send an ack to the publisher, use TX mode. This is documented in the spec and on various parts of our web site.
If you want to use your own DB then you may want to set it up as an end consumer for messages. In this case, you should use your own application-level ack.
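For the consumer side, a sketch of that application-level ack pattern: acknowledge only after the DB commit, and make the insert idempotent so the redelivery you get after a crash is harmless. The RabbitMQ Java client is assumed; the queue name, DB URL and PostgreSQL-style ON CONFLICT clause are placeholders.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class DbConsumer {
        public static void main(String[] args) throws Exception {
            java.sql.Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/app", "app", "secret");
            db.setAutoCommit(false);

            ConnectionFactory factory = new ConnectionFactory();
            Connection conn = factory.newConnection();
            Channel channel = conn.createChannel();

            DeliverCallback onDeliver = (consumerTag, delivery) -> {
                long tag = delivery.getEnvelope().getDeliveryTag();
                try {
                    // Idempotent insert: assumes the publisher sets a message id and
                    // the table has a unique constraint on it, so a redelivered copy
                    // after a crash-between-commit-and-ack is silently skipped.
                    String messageId = delivery.getProperties().getMessageId();
                    try (PreparedStatement ps = db.prepareStatement(
                            "INSERT INTO work (message_id, body) VALUES (?, ?) ON CONFLICT DO NOTHING")) {
                        ps.setString(1, messageId);
                        ps.setBytes(2, delivery.getBody());
                        ps.executeUpdate();
                    }
                    db.commit();
                    channel.basicAck(tag, false);            // ack only after the commit succeeded
                } catch (Exception e) {
                    try { db.rollback(); } catch (java.sql.SQLException ignored) { }
                    channel.basicNack(tag, false, true);     // requeue for another attempt
                }
            };

            channel.basicConsume("work-queue", false /* manual ack */, onDeliver, consumerTag -> { });
        }
    }

On the publishing side, the TX mode mentioned above corresponds to channel.txSelect() / channel.txCommit() in the same client.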
Do feel free to email the rabbitmq-discuss for more info and questions.
HTH
alexis
