Transactional poller not working with MS SQL Server - sql-server

I have a transactional poller (Spring 3.1.2 and Spring Integration 2.2.3) to ensure that messages are redelivered if the transaction is rolled back by an exception:
<int:outbound-channel-adapter id="obca" ref="notificationHandler"
        method="handle" channel="notificationChannel">
    <int:poller max-messages-per-poll="100" fixed-delay="6000"
            time-unit="MILLISECONDS">
        <int:transactional isolation="DEFAULT" />
    </int:poller>
</int:outbound-channel-adapter>
This works fine when unit testing with an H2 database: the message is redelivered as long as the method throws an exception.
@Transactional
public void handle(Message<MyPayload> message)
But when I use the SQL Server we run in production (MS SQL Server, which uses READ COMMITTED SNAPSHOT as its isolation level, if that is important), redelivery on exception does not work. Any hints as to why this may be?
EDIT:
I tested with TRACE logging but couldn't find any suspicious log messages. I finally got it to work: I declared a separate dataSource (same config as the normal database) and a transactionManager of type org.springframework.jdbc.datasource.DataSourceTransactionManager instead of my normal org.springframework.orm.jpa.JpaTransactionManager.
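For reference, a minimal sketch of that wiring (the bean names, property placeholders, and the DBCP pool class are assumptions; any DataSource implementation should work):

<!-- separate pool used only by the poller; the pool class is an assumption -->
<bean id="pollerDataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>

<!-- plain JDBC transaction manager for the poller, instead of the JpaTransactionManager -->
<bean id="pollerTransactionManager"
      class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="pollerDataSource"/>
</bean>

<int:poller max-messages-per-poll="100" fixed-delay="6000" time-unit="MILLISECONDS">
    <int:transactional transaction-manager="pollerTransactionManager" isolation="DEFAULT"/>
</int:poller>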

Related

Aggregate after exception from ftp consumer: FatalFallbackErrorHandler

My Camel route tries to pick up some files from sftp, transfer them to a network location, and delete them from sftp. If the sftp server is unreachable after 3 attempts, I want the route to send an email warning the admin about the problem.
For this reason my sftp address has the following parameters:
maximumReconnectAttempts=2&throwExceptionOnConnectFailed=true&consumer.bridgeErrorHandler=true
In case the network location is not available, I want the route to notify the admin and not delete the files from sftp.
For this reason I have set .handled(false) in onException.
However, when connecting to sftp fails, the aggregation also fails and no emails are sent. I have made a minimal example below:
// configure
onException(Throwable.class)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .redeliveryDelay(1000)
    .handled(false)
    .log(LoggingLevel.ERROR, LOG, "XXX - Error moving files")
    .to(AGGREGATEROUTE)
    .end();
from(downloadFrom)
    .to(to)
    .log(LoggingLevel.INFO, LOG, "XXX - Moving file OK")
    .to(AGGREGATEROUTE);
from(AGGREGATEROUTE)
    .log(LoggingLevel.INFO, LOG, "XXX - Starting aggregation.")
    .aggregate(constant(true), new GroupedExchangeAggregationStrategy())
        .completionFromBatchConsumer()
        .completionTimeout(10000)
    .log(LoggingLevel.INFO, LOG, "XXX - Aggregation completed, sending mail.");
In the logs I see:
16:02| ERROR | CamelLogger.java 156 | XXX - Error moving files
Then the logs for the Exception occurring during connection.
And then this:
16:02| ERROR | FatalFallbackErrorHandler.java 174 | Exception occurred while trying to handle previously thrown exception on exchangeId: ID-LP0641-1552662095664-0-2 using: [Pipeline[[Channel[Log(proefjes.camel_cursus.routebuilders.MoveWithPickupExceptions)[XXX - Error moving files]], Channel[sendTo(direct://aggregate)]]]].
16:02| ERROR | FatalFallbackErrorHandler.java 172 | \--> New exception on exchangeId: ID-LP0641-1552662095664-0-2
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://user@mycompany.nl:22
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:149)
I do not see "XXX - Starting aggregation." in the log, which I would expect. Does some kind of error occur before aggregation? The breakpoint on aggregate(*, *) is never reached.
First, I just want to clarify something. You write "In case the network location is not available, I want the route to notify the admin and not delete the files from sftp", but shouldn't that be obvious anyhow? I mean, if the network location is not available, wouldn't deleting the files from sftp be impossible?
It's a little confusing that your exception handler is also routing .to(AGGREGATEROUTE). Given that you want to email an admin, shouldn't that be in the exception handler, not in the happy path? Why would you and how would you "aggregate" a connection failure?
Finally, and here I think is a real problem with your implementation: you may have misunderstood what handled(false) does. Setting this to false means routing stops and the exception is propagated back to the caller. I'm not sure what the .to(AGGREGATEROUTE) would do in this case, but I'm not surprised it's not being called.
I suggest trying a few things. I don't have your code so I'm not sure which will work best. These are all related and any might work:
Change handled(false) to handled(true).
Replace handled with continued(true).
Use a Dead Letter Channel (see the sketch after the references below).
Reference:
Handle and Continue Exceptions
Dead Letter Channel
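A minimal sketch of the Dead Letter Channel option (the queue name is an assumption; exchanges whose redeliveries are exhausted get parked on the dead queue instead of failing the consumer):

// route-level error handler in the RouteBuilder's configure()
errorHandler(deadLetterChannel("jms:queue:dead")
    .maximumRedeliveries(2)
    .redeliveryDelay(1000)
    .retryAttemptedLogLevel(LoggingLevel.WARN));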
Since error handling is different depending on which endpoint causes the error, I have solved this by having two different versions of onException:
// configure exception on sftp end
onException(Throwable.class)
    .maximumRedeliveries(2)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .redeliveryDelay(1000)
    .onWhen(new hasSFTPErrorPredicate())
    // .continued(true) // tries to connect once, mails and continues to aggregation with empty exchange
    // .handled(false)  // tries to connect twice but does not reach mail
    .handled(true)      // tries to connect once, does reach mail
    // handled not defined: tries to connect twice but does not reach mail
    .log(LoggingLevel.INFO, LOG, "XXX - SFTP exception")
    .to(MAIL_ROUTE)
    .end();
// exception anywhere else
onException(Throwable.class)
    .maximumRedeliveries(2)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .redeliveryDelay(1000)
    .log(LoggingLevel.ERROR, LOG, "XXX - Error moving file ${file:name}: ${exception}")
    .to(AGGREGATEROUTE)
    .handled(false)
    .end();
Exceptions occurring at the sftp end are handled by the first onException, because there the hasSFTPErrorPredicate returns true. All this predicate does is check whether any exception, or any of its causes, has "Cannot connect to sftp:" in its message (a sketch follows below).
No rollback is required in this case because nothing has happened yet.
Any other exception is handled by the second onException.
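For illustration, a minimal sketch of such a predicate (the class name follows the question; the implementation details are assumptions):

import org.apache.camel.Exchange;
import org.apache.camel.Predicate;

// walks the exception chain looking for the sftp connect failure
public class hasSFTPErrorPredicate implements Predicate {
    @Override
    public boolean matches(Exchange exchange) {
        Throwable t = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Throwable.class);
        while (t != null) {
            String msg = t.getMessage();
            if (msg != null && msg.contains("Cannot connect to sftp:")) {
                return true;
            }
            t = t.getCause();
        }
        return false;
    }
}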

Rollback the message to Dead Letter Queue - Apache Camel

I have set up Apache Camel so that it consumes a message from one queue, performs some operation on it, and then transfers it to another queue.
Now, if an exception occurs, I want the message to be rolled back and, after 6 attempts, sent to the dead letter queue. Currently the rollback happens 5-6 times, but my message is not transferred to the dead letter queue.
Here is what happens:
Queue1 --> (Consumes) --> Operation (Exception thrown) --> rollback --> again Queue1 --> (Consumes) --> Operation (Exception thrown) --> rollback --> ... this happens 5-6 times and then my message is lost.
I don't know where my message is going or why it is getting lost, and from my ActiveMQ GUI I can see it is dequeued.
@Bean
public RedeliveryPolicy redeliveryPolicy() {
    RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
    redeliveryPolicy.setMaximumRedeliveries(2);
    redeliveryPolicy.setMaximumRedeliveryDelay(10000);
    redeliveryPolicy.setRedeliveryDelay(10000);
    return redeliveryPolicy;
}
---------------------Route extends SpringRouteBuilder-------------------
onException(MyException.class)
    .markRollbackOnly()
    .redeliveryPolicy(redeliveryPolicy)
    .useExponentialBackOff()
    .handled(true);

from("jms:queue:Queue1")
    .process(new Processor() {
        public void process(Exchange ex) {
            throw new RuntimeException();
        }
    })
    .to("jms:queue:myQueue");
I assume there are multiple problems.
markRollbackOnly stops the message. After this statement no further routing is done.
That is the reason why your RedeliveryPolicy and the rest of your onException route are completely ignored. You configure 2 redelivery attempts, but you write that it does 5 (the default redelivery of ActiveMQ).
To fix this, move markRollbackOnly to the end of your onException route.
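A sketch of that reordering, based on the code in the question:

onException(MyException.class)
    .redeliveryPolicy(redeliveryPolicy)
    .useExponentialBackOff()
    .handled(true)
    .markRollbackOnly(); // last, so the settings above take effect first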
If you consume transacted from your JMS broker, the message cannot get lost.
Since you lose it in case of an error, there is a problem with your transaction config. Configure the ActiveMQ component of Camel to use local JMS transactions when consuming.
@Bean(name = "activemq")
@ConditionalOnClass(ActiveMQComponent.class)
public ActiveMQComponent activeMQComponent(ConnectionFactory connectionFactory) {
    ActiveMQComponent activeMQComponent = new ActiveMQComponent();
    activeMQComponent.setConnectionFactory(connectionFactory);
    activeMQComponent.setTransacted(true);
    activeMQComponent.setLazyCreateTransactionManager(false);
    return activeMQComponent;
}
When this is in place, you can in fact remove the onException route because the redelivery is done by the JMS broker, so you have to configure the redelivery settings on your JMS connection. If the configured redelivery is exhausted and the message still produces a rollback, it is moved to the DLQ.
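For example, broker-driven redelivery can be set on the connection factory; a sketch (the URL and values are examples):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

ActiveMQConnectionFactory connectionFactory =
        new ActiveMQConnectionFactory("tcp://localhost:61616");
RedeliveryPolicy policy = connectionFactory.getRedeliveryPolicy();
policy.setMaximumRedeliveries(6);       // after 6 failed deliveries...
policy.setInitialRedeliveryDelay(1000);
policy.setUseExponentialBackOff(true);
policy.setBackOffMultiplier(2.0);
// ...the broker moves the message to the DLQ (ActiveMQ.DLQ by default)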
Be aware when using an additional onException route because this is pure Camel. The Camel error handler does NOT redeliver on route level, but on processor level. So if you configure both broker and Camel redelivery, it can multiply them.

What causes the "Memcache operation failed, giving up" exception?

I have an App Engine Java project and am using Objectify. I sporadically get a stack trace in the "Stackdriver Error Reporting" view of the App Engine web console, related to putting an item into memcache. This is the code:
try {
    TestItem t = new TestItem(...);
    ofy().save().entity(t).now();
} catch (Exception e) {
}
And this is the error I see sporadically:
com.googlecode.objectify.cache.MemcacheServiceRetryProxy invoke: Memcache operation failed, giving up
java.lang.reflect.InvocationTargetException
at com.google.appengine.runtime.Request.process-i4dx9s2kED3CVcPe(Request.java)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:44)
at com.googlecode.objectify.cache.MemcacheServiceRetryProxy.invoke(MemcacheServiceRetryProxy.java:68)
at com.sun.proxy.$Proxy9.putAll(Unknown Source)
at com.googlecode.objectify.cache.KeyMemcacheService.putAll(KeyMemcacheService.java:91)
at com.googlecode.objectify.cache.EntityMemcache.empty(EntityMemcache.java:319)
at com.googlecode.objectify.cache.CachingAsyncDatastoreService$5.trigger(CachingAsyncDatastoreService.java:445)
at com.googlecode.objectify.cache.TriggerFuture.isDone(TriggerFuture.java:87)
at com.googlecode.objectify.cache.TriggerFuture.get(TriggerFuture.java:102)
at com.googlecode.objectify.impl.ResultAdapter.now(ResultAdapter.java:34)
at com.googlecode.objectify.util.ResultWrapper.translate(ResultWrapper.java:22)
at com.googlecode.objectify.util.ResultWrapper.translate(ResultWrapper.java:10)
at com.googlecode.objectify.util.ResultTranslator.nowUncached(ResultTranslator.java:21)
at com.googlecode.objectify.util.ResultCache.now(ResultCache.java:30)
at com.googlecode.objectify.util.ResultWrapper.translate(ResultWrapper.java:22)
at com.googlecode.objectify.util.ResultWrapper.translate(ResultWrapper.java:10)
at com.googlecode.objectify.util.ResultTranslator.nowUncached(ResultTranslator.java:21)
at com.googlecode.objectify.util.ResultCache.now(ResultCache.java:30)
at com.me.test.Test.putSomethinInMemcache(Test.java:13)
...
Caused by: com.google.appengine.api.memcache.MemcacheServiceException: Memcache putAll: Unknown exception setting 1 keys
at com.google.appengine.api.memcache.MemcacheServiceApiHelper$RpcResponseHandler.handleApiProxyException(MemcacheServiceApiHelper.java:69)
at com.google.appengine.api.memcache.AsyncMemcacheServiceImpl$RpcResponseHandlerForPut.handleApiProxyException(AsyncMemcacheServiceImpl.java:349)
at com.google.appengine.api.memcache.MemcacheServiceApiHelper$1.absorbParentException(MemcacheServiceApiHelper.java:111)
at com.google.appengine.api.utils.FutureWrapper.handleParentException(FutureWrapper.java:52)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:91)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:89)
at com.google.appengine.api.memcache.MemcacheServiceImpl.quietGet(MemcacheServiceImpl.java:26)
at com.google.appengine.api.memcache.MemcacheServiceImpl.putAll(MemcacheServiceImpl.java:115)
... 52 more
It doesn't appear to be caught by the try statement; I just see it in the admin console mentioned earlier.
Does anyone know what this means, or how I can catch it? My main worry is that there could be an old copy of the object stuck in memcache after this operation fails.
Using Objectify 5.1.10.
Thanks
This is a get() operation. If memcache is unavailable during a get() operation, Objectify just reads from the datastore. The error is logged and performance suffers somewhat but the app marches on.
It is technically possible to have errors during write operations (any save() clears the cache entry; reads will repopulate the cache). This could in theory leave stale info in the cache. There's nothing that can really be done about this: if you can't clear the cache entry, it's going to be stuck there. My advice is that if you have sensitive data but want it cached, put a reasonable timeout on the cache entry (@Cache(expirationSeconds=60) or whatnot).
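For illustration, a sketch of such an entity (mirroring the TestItem from the question; the id field is an assumption):

import com.googlecode.objectify.annotation.Cache;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
@Cache(expirationSeconds = 60) // a stale entry survives at most one minute
public class TestItem {
    @Id Long id;
}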

Camel ActiveMQ client blocking, temp storage usage immediately hits 100%

I'm seeing 100% utilisation of ActiveMQ's temp storage (configured to be 100mb), and the ActiveMQ client is blocking. This 100% usage remains permanently, and I have no idea what's going on.
I have a camel route, which consumes from a queue (QUEUE.IN) using the JmsTransactionManager.
public final class RouteUnderTest extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("activemq-transacted:QUEUE.IN")
            .bean(myBean)
            .to("activemq:QUEUE.OUT");
    }
}
While processing the message from this queue, I'm invoking a Spring Integration client (myBean), which is configured as follows:
<int:gateway id="myBean" service-interface="MyBean">
    <int:method name="request" request-channel="channel"/>
</int:gateway>

<int:chain input-channel="channel">
    <int:transformer ref="transformedToJsonHere"/>
    <jms:outbound-gateway request-destination-name="QUEUE.MYBEAN"
                          receive-timeout="5000"
                          explicit-qos-enabled="true"
                          time-to-live="5000"
                          delivery-persistent="false"/>
    <int:transformer ref="transformedToAnObjectHere"/>
</int:chain>
My broker is configured to use LevelDB, with the following usage limits:
<persistenceAdapter>
    <levelDB directory="${activemq.data}/leveldb"/>
</persistenceAdapter>

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage percentOfJvmHeap="70"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="500 mb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="100 mb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>
When my route consumes the message and then attempts to put a non-persistent message on QUEUE.OUT, the client is blocked and my broker shows 100% usage of temp storage.
And I see the following ActiveMQ log entry:
2015-07-28 15:44:59,678 | INFO | Usage(default:temp:queue://QUEUE.MYBEAN:temp) percentUsage=0%, usage=104857600, limit=104857600, percentUsageMinDelta=1%;Parent:Usage(default:temp) percentUsage=100%, usage=104857600, limit=104857600, percentUsageMinDelta=1%: Temp Store is Full (0% of 104857600). Stopping producer (ID:orbit-vm-55561-1438094698190-1:1:3:1) to prevent flooding queue://QUEUE.MYBEAN. See http://activemq.apache.org/producer-flow-control.html for more info (blocking for: 1s) | org.apache.activemq.broker.region.Queue | ActiveMQ NIO Worker 6
The queues look like this (you can see that the QUEUE.IN message has not been dequeued, because it's still being processed transactionally, and no message has gone to QUEUE.MYBEAN).
I can fix this problem with any one of the following approaches:
Use KahaDB instead of LevelDB
Increase temp storage limit (150MB seems to do it but I haven't experimented a great deal)
Configure tempDataStore in activemq.xml (see below)
When configuring the tempDataStore explicitly, it looks like this:
<tempDataStore>
    <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.leveldb.LevelDBStore">
        <property name="directory" value="${activemq.data}/tmp" />
    </bean>
</tempDataStore>
I should add, we were using KahaDB previously and this worked fine, but the upgrade to LevelDB has exposed this issue. Reverting to KahaDB is not an option.
I'm hoping someone can explain what we're seeing here, as the results are really difficult to understand. Why does using LevelDB necessitate a higher temp usage limit? And why does configuring the tempDataStore explicitly also fix the problem?
I don't fully understand what's going on here so I'm worried that simply increasing the temp usage limit a little will just hide the problem until a later date.
Versions:
ActiveMQ: 5.11.1
Camel: 2.14.0
Spring: 4.0.8.RELEASE
Spring Integration: 4.0.5.RELEASE
We ran into exactly the same issue with ActiveMQ 5.13.2
The solution when using LevelDB is to explicitly configure a dedicated tempDataStore as you did.
If not, the broker uses the same store (LevelDB) for both persistent messages (persistent usage) and non-persistent messages (temp usage). You may therefore end up in situations where the broker doesn't accept any non-persistent message anymore, just because the store already holds persistent ones up to the configured tempUsage limit. It will however accept persistent ones if your storeUsage limit is set higher...
When using KahaDB, the broker automatically uses another store for the non-persistent messages (created in the tmp directory). So you don't have the problem...
Look at the following code for more in-depth information: https://github.com/apache/activemq/blob/activemq-5.13.2/activemq-broker/src/main/java/org/apache/activemq/broker/BrokerService.java#L1739
When reading that code, remember that LevelDBStore implements PListStore, but KahaDBStore doesn't...

Need help with iBatis?

I am using iBatis (version 1.6.1) for my application. I thought it could handle all operations related to DB connections, but I am now a little concerned about this. Sometimes I get the following error details in my log file:
Message
Unable to open connection to "MySQL, MySQL provider 5.0.8.1".
Source
IBatisNet.DataMapper
Stack
at IBatisNet.DataMapper.SqlMapSession.OpenConnection(String connectionString)
at IBatisNet.DataMapper.SqlMapSession.OpenConnection()
at IBatisNet.DataMapper.Commands.DbCommandDecorator.System.Data.IDbCommand.ExecuteReader()
at IBatisNet.DataMapper.MappedStatements.MappedStatement.RunQueryForObject(RequestScope request, ISqlMapSession session, Object parameterObject, Object resultObject)
at IBatisNet.DataMapper.MappedStatements.MappedStatement.ExecuteQueryForObject(ISqlMapSession session, Object parameterObject, Object resultObject)
at IBatisNet.DataMapper.MappedStatements.MappedStatement.ExecuteQueryForObject(ISqlMapSession session, Object parameterObject)
at IBatisNet.DataMapper.SqlMapper.QueryForObject(String statementName, Object parameterObject)
at Sunya.VideoStreaming.Persistence.SchoolRepository.GetSchoolDetailsByUrl(String SchoolUrl) in D:\SVN\Sprint104\Persistence\SchoolRepository.cs:line 192
at EduVisionBasePage.GetSchoolUrl(School& _school)
at ASP.global_asax.Application_BeginRequest(Object sender, EventArgs e)
at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
Message
Too many connections
Source
MySql.Data
Stack
at MySql.Data.MySqlClient.MySqlStream.OpenPacket()
at MySql.Data.MySqlClient.NativeDriver.Authenticate411()
at MySql.Data.MySqlClient.NativeDriver.Authenticate()
at MySql.Data.MySqlClient.NativeDriver.Open()
at MySql.Data.MySqlClient.MySqlPool.GetPooledConnection()
at MySql.Data.MySqlClient.MySqlPool.GetConnection()
at MySql.Data.MySqlClient.MySqlConnection.Open()
at IBatisNet.DataMapper.SqlMapSession.OpenConnection(String connectionString)
Does anyone have any idea about this error?
iBATIS has been around for quite a while (2001), and I have my doubts that there's a major bug in the MySQL provider. You mentioned that the error occurs "sometimes" - if it's intermittent, perhaps the problem lies elsewhere. "Unable to open connection" is likely to mean just that. Network errors, database outages, too many unreleased connections, authentication issues, etc. might all cause this error.
If you are using the JDBC transaction manager, set the dataSource type to UNPOOLED in your myBatisConfig.xml:
<transactionManager type="JDBC"/>
<dataSource type="UNPOOLED">
.
.
.
</dataSource>
</environment>
That is the short answer. My hunch is that the connection pool class is holding connections too long. Lowering the values for poolMaximumActiveConnections and poolMaximumIdleConnections may solve the problem.
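If you stay with the pooled datasource, a sketch of tightening those settings (the values are examples, not recommendations):

<dataSource type="POOLED">
    <!-- fewer connections held at once -->
    <property name="poolMaximumActiveConnections" value="10"/>
    <property name="poolMaximumIdleConnections" value="5"/>
    <!-- ms before a checked-out connection can be reclaimed -->
    <property name="poolMaximumCheckoutTime" value="20000"/>
</dataSource>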
Notes: using myBatis 3.0.3 (Java), MySQL 5.5.9 on Windows.
Search terms: iBatis, myBatis, 'Too many connections', MySQL
