Documentation for SupervisingRouteController - apache-camel

I have an MQTT route like the one below:
from("paho:mytopic?brokerUrl=tcp://0.0.0.0:1883&clientId=ipc")
    .routeId("myroute")
    .to("log:my?showAll=true&multiline=true");
It starts only if the broker is available, and after that, if it loses connectivity with the broker, it handles the outage very well and resumes.
But my concern is: how can I start the route the first time if the broker is not available?
I searched on Google and learned that "SupervisingRouteController" might help in this regard, but no documentation is available on how to use it.
By trial and error I reached the point below, but I don't know what to do next, as there is no documentation:
final Main main = new Main();
main.addRouteBuilder(new MyMqttRoute());
SupervisingRouteController controller = main.getCamelContexts().get(0).getRouteController().unwrap(SupervisingRouteController.class);
main.run();

Here are two unit tests that show the usage of SupervisingRouteController:
SupervisingRouteControllerTest.java
SupervisingRouteControllerRestartTest.java
These may be helpful in understanding its usage.
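For a more concrete starting point, here is a hedged sketch based on the Camel 3 RouteController API, where supervising() switches the context over to the supervising controller; the method names (supervising, setInitialDelay, setBackOffDelay, setBackOffMaxAttempts) are from that API and may differ in earlier Camel versions:

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.spi.SupervisingRouteController;

public class SupervisedMqttMain {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Use the supervising controller so routes that fail on startup
        // (e.g. the broker is down) are retried in the background.
        SupervisingRouteController controller = context.getRouteController().supervising();
        controller.setInitialDelay(1000);     // wait 1s before the first attempt
        controller.setBackOffDelay(5000);     // retry every 5s
        controller.setBackOffMaxAttempts(10); // give up after 10 attempts

        context.addRoutes(new MyMqttRoute()); // the route builder from the question
        context.start();
    }
}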

Related

How to schedule JMS consuming in Apache Camel?

I need to consume JMS messages with Camel every day at 9pm (or from 9pm to 10pm, to give it time to consume all the messages).
I can't see any "scheduler" option for the URI "cMQConnectionFactory:queue:myQueue", while one exists for "file://" or "ftp://" URIs.
If I put a cTimer before it, it will send an empty message to the queue rather than schedule the consumer.
You can use a route policy, where you can set up, for example, a cron expression to tell when the route is started and when it's stopped.
http://camel.apache.org/scheduledroutepolicy.html
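For example, a hedged sketch using CronScheduledRoutePolicy from the camel-quartz2 component (the queue name and cron expressions are invented for illustration):

// Start the consumer at 9pm and stop it at 10pm, every day.
CronScheduledRoutePolicy policy = new CronScheduledRoutePolicy();
policy.setRouteStartTime("0 0 21 * * ?");
policy.setRouteStopTime("0 0 22 * * ?");

from("jms:queue:myQueue")
    .routeId("nightlyJmsConsumer")
    .routePolicy(policy)
    .noAutoStartup()
    .to("direct:process");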
Other alternatives are to start/stop the route via the Java API or JMX etc. and have some other logic that knows when to do that according to the clock.
This is something that has caused me a significant amount of trouble. There are a number of ways of skinning this cat, and none of them are great as far as I can see.
One is to set the route not to start automatically, and use a schedule to start the route and then stop it again after a short time using the control bus EIP, as sketched below. http://camel.apache.org/controlbus.html
I didn't like this approach because I didn't trust that it would drain the queue completely once, and only once, per trigger.
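For reference, that control bus variant might look roughly like this (a sketch only; the route IDs and cron expressions are invented):

// Two quartz2 timers toggle the consumer route via the control bus.
from("quartz2://schedule/startJms?cron=0+0+21+*+*+?")
    .to("controlbus:route?routeId=jmsConsumer&action=start");

from("quartz2://schedule/stopJms?cron=0+0+22+*+*+?")
    .to("controlbus:route?routeId=jmsConsumer&action=stop");

from("jms:queue:myQueue")
    .routeId("jmsConsumer")
    .noAutoStartup()
    .to("direct:process");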
Another is to use a pollEnrich to query the queue, but that only seems to pick up one item from the queue, whereas I wanted to drain it completely (only once).
I wrote a custom bean that uses consumer and producer templates to read all the entries in a queue with a specified time-out.
I found an example on the internet somewhere, but it took me a long time to find, and quickly searching again I can't find it now.
So what I have is:
from("timer:myTimer...")
.beanRef( "myConsumerBean", "pollConsumer" )
from("direct:myProcessingRoute")
.to("whatever");
And a simple pollConsumer method:
public void pollConsumer() throws Exception {
    if (consumerEndpoint == null) {
        consumerEndpoint = consumer.getCamelContext().getEndpoint(endpointUri);
    }
    consumer.start();
    producer.start();
    // Drain the queue: keep receiving until the 1s timeout returns null.
    while (true) {
        Exchange exchange = consumer.receive(consumerEndpoint, 1000);
        if (exchange == null) break;
        producer.send(exchange);
        consumer.doneUoW(exchange); // complete the unit of work so the message is acknowledged
    }
    producer.stop();
    consumer.stop();
}
where the producer is a DefaultProducerTemplate, consumer is a DefaultConsumerTemplate, and these are configured in the bean configuration.
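For completeness, the wiring might look roughly like this in plain Java (a sketch; MyConsumerBean and its setters are hypothetical stand-ins for the original bean configuration):

DefaultConsumerTemplate consumer = new DefaultConsumerTemplate(camelContext);
DefaultProducerTemplate producer = new DefaultProducerTemplate(
    camelContext, camelContext.getEndpoint("direct:myProcessingRoute"));

MyConsumerBean bean = new MyConsumerBean();
bean.setConsumer(consumer);
bean.setProducer(producer);
bean.setEndpointUri("jms:queue:myQueue"); // the queue to drain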
This seems to work for me, but if anyone posts a better answer I'll be very interested to see it.

Codename One NetworkManager hangs after a few requests

I have been having an issue where the NetworkManager hangs forever after a few repeated requests (the same URL, just with different parameters). Mostly it works until the 4th request, then on the 5th request it hangs.
Please see the code
ConnectionRequest r = new ConnectionRequest();
r.setUrl(url);
r.setPost(false);
r.setDuplicateSupported(true);
NetworkManager.getInstance().addToQueueAndWait(r); // hangs right here
Reader reader = new InputStreamReader(new ByteArrayInputStream(r.getResponseData()), "UTF-8");
I have read that a few others had the same issue, and I did add setDuplicateSupported(true), but I am still getting the same behavior.
Any help is really appreciated. Many thanks to Shai (from Codename One) for being very supportive.
Thanks,
addToQueueAndWait literally waits until the network request finishes, and in this case it doesn't finish or time out. You can set the timeout to a lower value to make this fail more gracefully. I suggest reviewing the request you are making and why it is failing.
I also suggest looking at the network monitor to see what is going on.
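As a hedged sketch of the lower-timeout suggestion (setTimeout and the handleException override are part of the ConnectionRequest API; the values here are invented):

ConnectionRequest r = new ConnectionRequest() {
    @Override
    protected void handleException(Exception err) {
        Log.e(err); // surface the failure instead of hanging silently
    }
};
r.setUrl(url);
r.setPost(false);
r.setDuplicateSupported(true);
r.setTimeout(10000); // fail after 10s rather than waiting forever
NetworkManager.getInstance().addToQueueAndWait(r);
if (r.getResponseData() != null) {
    Reader reader = new InputStreamReader(
        new ByteArrayInputStream(r.getResponseData()), "UTF-8");
}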

Store data every minute: should I use a Service, an AsyncTask, or something else?

I want to store data in a database every minute. What should I use for this: a Service, an AsyncTask, or something else? I have gone through various links, which made me more confused.
I read the developer guide and came across this about getWritableDatabase:
Database upgrade may take a long time, you should not call this method from the application main thread,
At first I thought I would use an AsyncTask, but then I read this:
AsyncTasks should ideally be used for short operations (a few seconds at the most.)
After that I thought I could use a Service, but then I read this about Services:
A Service is not a thread. It is not a means itself to do work off of the main thread (to avoid Application Not Responding errors).
Now I am unable to work out what I should use to store data in the database periodically. Please help me here, as I am badly stuck.
Thanks in advance
You can't do a lot of work on the UI thread, so for database operations you could choose different approaches; a few that I prefer to use are listed below.
Create a thread pool and execute each database operation via a thread. This reduces load on the UI thread and avoids initializing lots of threads.
You can use a service for the database operations. Since services run on the UI thread, you can't write your operations directly in the service, so you have to create a separate thread inside the service method. Alternatively, you can use an IntentService directly, since it does not run on the UI thread.
Here is the developer documentation on thread pools in Android,
and this is the documentation for IntentService.
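A minimal sketch of the IntentService approach (the class, helper, and table names DbWriteService, MyDbHelper, and samples are hypothetical):

import android.app.IntentService;
import android.content.ContentValues;
import android.content.Intent;
import android.database.sqlite.SQLiteDatabase;

public class DbWriteService extends IntentService {
    public DbWriteService() {
        super("DbWriteService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // onHandleIntent runs on a worker thread, so calling
        // getWritableDatabase() here does not block the UI thread.
        SQLiteDatabase db = new MyDbHelper(this).getWritableDatabase();
        ContentValues values = new ContentValues();
        values.put("sampled_at", System.currentTimeMillis());
        db.insert("samples", null, values);
        db.close();
    }
}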
UPDATE
This will send an intent to your service every minute without using any processor time in your activity in between:
Intent myIntent = new Intent(context, MyServiceReceiver.class);
PendingIntent pendingIntent = PendingIntent.getBroadcast(context, 0, myIntent, 0);
AlarmManager alarmManager = (AlarmManager)context.getSystemService(Context.ALARM_SERVICE);
Calendar calendar = Calendar.getInstance();
calendar.setTimeInMillis(System.currentTimeMillis());
calendar.add(Calendar.SECOND, 60); // first time
long frequency = 60 * 1000; // in ms
alarmManager.setRepeating(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), frequency, pendingIntent);
Before that, check whether you really need a new service to be started every minute, or whether one service that checks for data changes each minute would do; starting a new service each time may consume more resources than checking within the same one.
UPDATE 2
private void ping() {
    // periodic action here
    scheduleNext();
}

private void scheduleNext() {
    mHandler.postDelayed(new Runnable() {
        public void run() { ping(); }
    }, 60000); // run again in 60s
}

@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    mHandler = new android.os.Handler();
    ping();
    return START_STICKY;
}
This is a simple example of how you can do it.

Neo4j store is not cleanly shut down; recovering from an inconsistent db state after an interrupted batch insertion

I was importing TTL ontologies from DBpedia, following the blog post http://michaelbloggs.blogspot.de/2013/05/importing-ttl-turtle-ontologies-in-neo4j.html. The post uses BatchInserters to speed up the task. It mentions:
Batch insertion is not transactional. If something goes wrong and you don't shutDown() your database properly, the database becomes inconsistent.
I had to interrupt one of the batch insertion tasks, as it was taking much longer than expected, which left my database in an inconsistent state. I get the following message:
db_name store is not cleanly shut down
How can I recover my database from this state? Also, for future purposes, is there a way of committing after importing each file, so that reverting to the last good state would be trivial? I thought of git, but I am not sure it would help with a binary file like index.db.
There are some cases where you cannot recover from unclean shutdowns when using the batch inserter API; note that its package name, org.neo4j.unsafe.batchinsert, contains the word "unsafe" for a reason. The intention of the batch inserter is to operate as fast as possible.
If you want to guarantee a clean shutdown you should use a try/finally:
BatchInserter batch = BatchInserters.inserter(<dir>);
try {
    // do your batch insertions here
} finally {
    batch.shutdown();
}
Another alternative for special cases is registering a JVM shutdown hook. See the following snippet as an example:
final BatchInserter batch = BatchInserters.inserter(<dir>);
// Register the hook before doing any work, so an interruption during
// the batch operations still triggers a clean shutdown.
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        batch.shutdown();
    }
});
// do some operations potentially throwing exceptions

Connections with Entity Framework and Transient Fault Handling Block?

We're migrating SQL to Azure. Our DAL is Entity Framework 4.x based. We want to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 rule (or maybe more of a 95/5, but you get the point): we're not looking to spend weeks refactoring/rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but not all of the code written and generated against it, any more than we have to, since this is only here to address a minority case. Mitigation >>> elimination of this edge case, for us.
Looking at the possible options explained here on MSDN, Case #3 there seems the "quickest" to implement, but only at first glance. Pondering this solution a bit, it struck me that we might have problems with connection management, since it circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems to me that the "solution" is to make sure 100% of the Contexts we instantiate use using blocks, but with our architecture this would be difficult.
So my question: going with Case #3 from that link, are hanging connections a problem, or is there some magic going on that I don't know about?
I've done some experimenting, and it turns out this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" similarly.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
    const int maxRetries = 4;
    const int initialDelayInMilliseconds = 100;
    const int maxDelayInMilliseconds = 5000;
    const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;

    var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
        TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
        TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
        TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));

    policy.ExecuteAction(() =>
    {
        try
        {
            // Force the connection open and issue a trivial command so any
            // transient failure surfaces here, inside the retry policy.
            Connection.Open();
            var storeConnection = (SqlConnection)((EntityConnection)Connection).StoreConnection;
            new SqlCommand("declare @i int", storeConnection).ExecuteNonQuery();
            //Connection.Close();
            // throw new ApplicationException("Test only");
        }
        catch (Exception e)
        {
            Connection.Close();
            Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
            throw;
        }
    });
}
In this scenario, we forcibly open the Connection (which was the goal here). Because of this, the Context keeps it open across many calls, so we must tell the Context when to close the connection. Our primary mechanism for doing that is calling Dispose on the Context. So if we just let garbage collection clean up our Contexts, we allow connections to remain hanging open.
I tested this by toggling the comments on the Connection.Close() in the try block and running a batch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective). By calling Close, that number hovered at ~12. I then repeated this with a small number of unit tests, both with and without a using block for the Context, and saw the same pattern (different numbers; I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
s.last_request_end_time, s.cpu_time,
e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
ON s.session_id = e.session_id
WHERE login_name='myuser'
ORDER BY s.login_name
Conclusion: if you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all Contexts you work with; otherwise you will have problems (which, with SQL Azure, will cause your database to be throttled and ultimately taken offline for hours!).
The problem with this approach is that it only takes care of connection retries, not command retries.
If you use Entity Framework 6 (currently in alpha) then there is new built-in support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling Block without needing to change every database call; generally you will only need to change your config file and possibly one or two lines of code.
This allows you to use it for Entity Framework or LINQ to SQL.
https://github.com/robdmoore/ReliableDbProvider
