I have a scenario where I first have to update a row in three tables, then insert a new row into each of those tables. All of this should run as a single batch of statements and roll back if any statement fails.
For example:
statement1 = update table1;
statement2 = update table2;
statement3 = update table3;
statement4 = insert into table1;
statement5 = insert into table2;
statement6 = insert into table3;
The answer to the above question from the Camel community was to use a Transactional Client, but now the issue is that the transaction is not rolled back when one of the MyBatis statements fails.
E.g., exception case: the first two updates were not rolled back when the third one failed:
.to("mybatis:userMapper.updatePerson?statementType=Update") --- Passed
.to("mybatis:userMapper.updateCertificate8?statementType=Update") ---- Passed
.to("mybatis:userMapper.updateApplicationGroup?statementType=Update") ---- Failed
**Am I missing anything?**
Camel Registry:
SimpleRegistry registry = new SimpleRegistry();
DataSourceTransactionManager dataSourceTransactionManager = new DataSourceTransactionManager(
        sqlSessionFactory.getConfiguration().getEnvironment().getDataSource());
registry.put("transactionManager",dataSourceTransactionManager);
SpringTransactionPolicy springTransactionPolicy = new SpringTransactionPolicy();
springTransactionPolicy.setTransactionManager(dataSourceTransactionManager);
springTransactionPolicy.setPropagationBehaviorName("PROPAGATION_REQUIRED");
registry.put("PROPAGATION_REQUIRED",springTransactionPolicy);
camelContext = new DefaultCamelContext(registry);
camelContext.setTracing(true);
camelContext.start();
Camel Route:
onException(JMSException.class)
.handled(true).maximumRedeliveries(0).end();
onException(IllegalArgumentException.class)
.handled(true).maximumRedeliveries(0).rollback("Rolling back the IllegalArgumentException")
.end();
onException(PersistenceException.class)
.handled(true).maximumRedeliveries(0).rollback("Rolling back the transaction")
.end();
onException(RollbackExchangeException.class)
.handled(false).maximumRedeliveries(0).process(new CamelTibcoMessageProcessor())
.end();
from("timer:foo?period=10000")
.policy("PROPAGATION_REQUIRED") .to("mybatis:userMapper.updatePerson?statementType=Update")
.to("mybatis:userMapper.updateCertificate8?statementType=Update")
.to("mybatis:userMapper.updateApplicationGroup?statementType=Update")
.to("mybatis:userMapper.insertPersonFromCAMSCTSBridge?statementType=InsertList&executorType=batch")
.end();
Figured it out; the solution is straightforward, see the documentation for Camel MyBatis and MyBatis-Spring.
Convert the Spring XML config to Java and you have it. Below is what I did, and it worked fine.
Add the mybatis-spring Maven dependency to your pom.
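For example (the version is an assumption; pick the release that matches your MyBatis version):

<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis-spring</artifactId>
    <!-- illustrative version; align it with your MyBatis release -->
    <version>2.0.7</version>
</dependency>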
**Sample setup**
SimpleRegistry registry = new SimpleRegistry();

// Plain pooled DataSource (c3p0)
ComboPooledDataSource cpds = new ComboPooledDataSource();
cpds.setDriverClass("oracle.jdbc.driver.OracleDriver");
cpds.setJdbcUrl("jdbc_url");
cpds.setUser("username");
cpds.setPassword("password");

// The key piece: wrap the DataSource in a TransactionAwareDataSourceProxy so that
// MyBatis and the Spring transaction manager share the same managed connection.
TransactionAwareDataSourceProxy transactionAwareDataSourceProxy = new TransactionAwareDataSourceProxy(cpds);
DataSourceTransactionManager transactionManager = new DataSourceTransactionManager(transactionAwareDataSourceProxy);
registry.put("transactionManager", transactionManager);

ApplicationContext appContext = new ClassPathXmlApplicationContext();
SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
factoryBean.setConfigLocation(appContext.getResource("mapper/your_mybatis_config.xml"));
factoryBean.setDataSource(cpds);

// Transaction policies referenced by name from the routes
SpringTransactionPolicy propagationRequired = new SpringTransactionPolicy();
propagationRequired.setTransactionManager(transactionManager);
propagationRequired.setPropagationBehaviorName("PROPAGATION_REQUIRED");
registry.put("PROPAGATION_REQUIRED", propagationRequired);

SpringTransactionPolicy propagationRequiresNew = new SpringTransactionPolicy();
propagationRequiresNew.setTransactionManager(transactionManager);
propagationRequiresNew.setPropagationBehaviorName("PROPAGATION_REQUIRES_NEW");
registry.put("PROPAGATION_REQUIRES_NEW", propagationRequiresNew);

camelContext = new DefaultCamelContext(registry);
camelContext.setTracing(true);
camelContext.start();

// Register the MyBatis component backed by the Spring-aware SqlSessionFactory
MyBatisComponent component = new MyBatisComponent();
component.setSqlSessionFactory(factoryBean.getObject());
camelContext.addComponent("mybatis", component);
camelContext.addRoutes(new SomeRoute());
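For completeness, a sketch of what SomeRoute can look like (mapper names reused from the question; the name passed to transacted() must match the policy registered above):

import org.apache.camel.builder.RouteBuilder;

public class SomeRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:foo?period=10000")
            // one Spring-managed transaction around all four statements;
            // a failure in any of them rolls the whole batch back
            .transacted("PROPAGATION_REQUIRED")
            .to("mybatis:userMapper.updatePerson?statementType=Update")
            .to("mybatis:userMapper.updateCertificate8?statementType=Update")
            .to("mybatis:userMapper.updateApplicationGroup?statementType=Update")
            .to("mybatis:userMapper.insertPersonFromCAMSCTSBridge?statementType=InsertList&executorType=batch");
    }
}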
I migrated my code from Web API 2 to .NET 5, and now I have a problem when executing a non-query. In the old code I had:
public void CallSp()
{
var connection = dataContext.GetDatabase().Connection;
var initialState = connection.State;
try
{
if (initialState == ConnectionState.Closed)
connection.Open();
connection.Execute("mysp", commandType: CommandType.StoredProcedure);
}
catch
{
throw;
}
finally
{
if (initialState == ConnectionState.Closed)
connection.Close();
}
}
This was working fine. After I migrated the code, I'm getting the following exception:
BeginExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
So, just before calling Execute I added:
var ct = dataContext.GetDatabase().CurrentTransaction;
var tr = ct.UnderlyingTransaction;
And passed the transaction to Execute. Alas, CurrentTransaction is null, so the above change can't be used.
So then I tried to create a new transaction by doing:
using var tr = dataContext.GetDatabase().BeginTransaction();
And this second change throws a different exception complaining that SqlConnection cannot use parallel transactions.
So now I'm in a situation where I originally had no problem, but now I can neither use an existing transaction nor create a new one.
How can I make Dapper happy again?
> How can I make Dapper happy again?
Dapper has no opinion here whatsoever; what is unhappy is your data provider. It sounds like somewhere, somehow, your dataContext has an ADO.NET transaction active on the connection. I can't tell you where, how, or why. But while a transaction is active on a connection, ADO.NET providers tend to be pretty fussy about having that same transaction explicitly specified on every command executed on the connection. This could be because you are somehow sharing the same connection between multiple threads, or it could simply be that something in the dataContext has left a transaction incomplete somewhere.
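If you can locate that transaction, the usual fix is to hand it to Dapper explicitly. A minimal sketch, assuming an EF Core DbContext whose transaction (if any) was started through its Database facade; per the question CurrentTransaction was null, so the first step is still finding out where the transaction actually originates:

using System.Data;
using Dapper;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Storage;

public static class SpCaller
{
    public static void CallSp(DbContext dataContext)
    {
        var connection = dataContext.Database.GetDbConnection();
        if (connection.State == ConnectionState.Closed)
            connection.Open();

        // Reuse the ADO.NET transaction EF Core is tracking, if any, so the
        // provider sees it on the command Dapper builds.
        var transaction = dataContext.Database.CurrentTransaction?.GetDbTransaction();

        connection.Execute("mysp",
            commandType: CommandType.StoredProcedure,
            transaction: transaction);
    }
}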
We are using ASP.NET Core 3.x with EF Core 3.x.
We have authorization on a couple of entities (so that only some of the records from a table are returned in the response). This is achieved through a SQL view (which internally joins two tables); we query against that view, and it returns only the records the logged-in user is authorized to see.
To achieve this, we need to insert the logged-in user id and @@SPID (SQL Server) into the Session table (used by the above view) just before executing select queries (on the Application table), and delete that record immediately after the query has executed. For this we are using an interceptor (DbInterceptor below).
Session table:

| userId | sessionId |
| ------ | --------- |
| 1      | 32        |
| 2      | 26        |
Application table:

| id | userId | text                     |
| -- | ------ | ------------------------ |
| 1  | 1      | I need help to ...       |
| 2  | 2      | I don't speak english... |
Db interceptor implementation:
public class DbInterceptor : DbCommandInterceptor
{
    private readonly IExecutionContext _executionContext;

    public DbInterceptor(IExecutionContext executionContext)
    {
        _executionContext = executionContext;
    }

    public override async Task<InterceptionResult<DbDataReader>> ReaderExecutingAsync(DbCommand command,
        CommandEventData eventData, InterceptionResult<DbDataReader> result,
        CancellationToken cancellationToken = default)
    {
        // Register the current user against the SQL Server session (@@SPID) before the query runs.
        var sqlParameter = new SqlParameter("UserId",
            _executionContext.CurrentPrincipal.FindFirst(Claims.TSSUserId).Value);
        await eventData.Context.Database.ExecuteSqlRawAsync("EXEC InsertUserSP @UserId", sqlParameter);
        return await base.ReaderExecutingAsync(command, eventData, result, cancellationToken);
    }

    public override async Task<DbDataReader> ReaderExecutedAsync(DbCommand command,
        CommandExecutedEventData eventData, DbDataReader result,
        CancellationToken cancellationToken = default)
    {
        // Remove the registration once the query has executed.
        var sqlParameter = new SqlParameter("UserId",
            _executionContext.CurrentPrincipal.FindFirst(Claims.TSSUserId).Value);
        await eventData.Context.Database.ExecuteSqlRawAsync("EXEC DeleteUserSP @UserId", sqlParameter);
        return await base.ReaderExecutedAsync(command, eventData, result, cancellationToken);
    }
}
Now with this we get an exception:
System.InvalidOperationException: 'There is already an open DataReader associated with this Command which must be closed first.'
It is thrown on the line `await eventData.Context.Database.ExecuteSqlRawAsync("EXEC DeleteUserSP @UserId", sqlParameter);` in the `ReaderExecutedAsync` method of the interceptor.
I googled this exception and found that it can be avoided by setting MultipleActiveResultSets to true in the connection string.
Is there any side effect of using MultipleActiveResultSets?
While googling around that topic, I came across several articles stating that the same connection instance may be shared among different requests when MultipleActiveResultSets is set to true. If the same connection is shared among live request threads, that is a problem, because the authorization relies on every live thread having a unique @@SPID.
How will the DbContext be provided with a connection instance from the connection pool?
At ReaderExecutedAsync the data reader is still open and fetching rows, so it's too early to unset the user. Try hooking DataReaderDisposing instead.
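An untested sketch of that idea, as an extra override in the DbInterceptor above (assumptions: EF Core 3.x, where DataReaderDisposing is synchronous, and that the reader must be closed before the same connection can run another command; verify both against your version):

public override InterceptionResult DataReaderDisposing(DbCommand command,
    DataReaderDisposingEventData eventData, InterceptionResult result)
{
    // The query is finished here; close the reader so the connection is free,
    // then remove the session registration with a plain ADO.NET command.
    eventData.DataReader.Close();
    using (var cleanup = command.Connection.CreateCommand())
    {
        cleanup.CommandText = "EXEC DeleteUserSP @UserId";
        var userId = cleanup.CreateParameter();
        userId.ParameterName = "@UserId";
        userId.Value = _executionContext.CurrentPrincipal.FindFirst(Claims.TSSUserId).Value;
        cleanup.Parameters.Add(userId);
        cleanup.ExecuteNonQuery();
    }
    return base.DataReaderDisposing(command, eventData, result);
}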
If that doesn't work, force the connection open and call the procedure outside an interceptor, e.g.:
var con = db.Database.GetDbConnection();
await con.OpenAsync();
await db.Database.ExecuteSqlRawAsync("EXEC InsertUserSP @UserId", sqlParameter);
This will ensure that the connection is not returned to the connection pool until the DbContext is disposed.
I'm trying to use a TransactionScope with the Npgsql provider.
I found in an old question (provider for PostgreSQL in .net with support for TransactionScope) that Npgsql didn't support it yet.
Now, after about 5 years, does Npgsql support TransactionScope?
I made a test for myself, using Npgsql 3.0.3 and the following code:
using (var scope = new TransactionScope())
{
    using (var connection = new Npgsql.NpgsqlConnection("server=localhost;user id=*****;password=*****;database=test;CommandTimeout=0"))
    {
        connection.Open();
        var command = new NpgsqlCommand(@"insert into test.table1 values ('{10,20,30}', 2);");
        command.Connection = connection;
        var result = command.ExecuteNonQuery();
        // scope.Complete(); <-- Not committed
    }
}
Can anyone confirm that Npgsql doesn't support TransactionScope?
EDIT #1
After the confirmation that Npgsql supports TransactionScope, I found that you need to make sure distributed transactions are enabled in your PostgreSQL configuration, by checking the max_prepared_transactions parameter in the postgresql.conf file (remember to restart your server).
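For reference, the relevant line in postgresql.conf looks like this (the value just needs to be greater than the default of zero; 100 is arbitrary):

max_prepared_transactions = 100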
EDIT #2
I enabled distributed transactions on my server, but now I get an error using the TransactionScope with Npgsql.
This is my code:
using (var sourceDbConnection = new NpgsqlConnection(SourceConnectionString))
using (var destinationDbConnection = new NpgsqlConnection(DestinationConnectionString))
using (var scope = new TransactionScope())
{
    sourceDbConnection.Open();
    destinationDbConnection.Open();
    Logger.Info("Moving data for the {0} table.", TableName.ToUpper());
    var innerStopWatch = new Stopwatch();
    innerStopWatch.Start();
    foreach (var entity in entities)
    {
        var etlEntity = new EtlInfoItem
        {
            MigratedEntityId = entity.RowId,
            ProjectId = entity.ProjectId,
            ExecutionDatetime = DateTime.Now
        };
        // Insert into the destination database
        var isRowMigrated = InsertEntity(entity, DestinationSchema, destinationDbConnection);
        if (isRowMigrated)
        {
            // Update the ETL migration table
            InsertCompletedEtlMigrationEntity(etlEntity, EtlSchema, sourceDbConnection);
        }
        else
        {
            // Update the ETL migration table
            InsertFailedEtlMigrationEntity(etlEntity, EtlSchema, sourceDbConnection);
        }
    }
    Logger.Info("Data moved in {0} sec.", innerStopWatch.Elapsed);
    Logger.Info("Committing transaction to the source database");
    innerStopWatch.Restart();
    scope.Complete();
    innerStopWatch.Stop();
    Logger.Info("Transaction committed in {0} sec.", innerStopWatch.Elapsed);
}
When the TransactionScope exits the scope (when leaving the using statement), I get a NullReferenceException with the following stack trace:
Server stack trace:
at Npgsql.NpgsqlConnector.Cleanup()
at Npgsql.NpgsqlConnector.Break()
at Npgsql.NpgsqlConnector.ReadSingleMessage(DataRowLoadingMode dataRowLoadingMode, Boolean returnNullForAsyncMessage)
at Npgsql.NpgsqlConnector.ReadExpectingT
.........
It happens randomly.
Npgsql does support TransactionScope and has done so for quite a while. However, at least for the moment, in order to have your connection participate in the TransactionScope you must either:
- include Enlist=true in your connection string, or
- call NpgsqlConnection.EnlistTransaction.
Take a look at the Npgsql unit tests around this for some examples.
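A minimal sketch of the first option, mirroring the test code from the question (credentials are placeholders; the only addition is Enlist=true):

using System.Transactions;
using Npgsql;

public static class EnlistDemo
{
    public static void Run()
    {
        using (var scope = new TransactionScope())
        using (var connection = new NpgsqlConnection(
            "server=localhost;user id=someuser;password=somepass;database=test;Enlist=true"))
        {
            connection.Open(); // enlists in the ambient TransactionScope because Enlist=true

            using (var command = new NpgsqlCommand(
                @"insert into test.table1 values ('{10,20,30}', 2);", connection))
            {
                command.ExecuteNonQuery();
            }

            scope.Complete(); // omit this and the insert is rolled back on dispose
        }
    }
}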
I want to enable the Camel load balancer for multiple datasources. Can anyone please tell me how to use multiple datasources in a Camel JDBC endpoint?
Thanks in advance!
Here is my code, creating multiple datasources in the DefaultCamelContext:
SimpleRegistry simpleregistry = new SimpleRegistry();
Map<String, Object> ds = new HashMap<String, Object>();
ds.put("dataSource", mydataSource);
ds.put("dataSource1", mydataSource1);
simpleregistry.putAll(ds);
Camel camel = CamelExtension.get(system);
DefaultCamelContext defaultCamelContext = camel.context();
defaultCamelContext.setRegistry(simpleregistry);
My route builder pointing to multiple datasources:
from("direct:checkUser").setBody(simple("${body}"))
.loadBalance()
.failover()
.to("jdbc:dataSource?resetAutoCommit=false&outputType=SelectList","jdbc:dataSource1?resetAutoCommit=false&outputType=SelectList");
My requirement is that if dataSource is down, my requests should automatically be redirected to dataSource1. Please let me know how to achieve this.
Separate the to calls, so they are individual:
from("direct:checkUser").setBody(simple("${body}"))
.loadBalance().failover()
.to("jdbc:dataSource?resetAutoCommit=false&outputType=SelectList")
.to("jdbc:dataSource1?resetAutoCommit=false&outputType=SelectList");
I set up a two-server Solr cluster with SolrCloud; currently I have a leader and a replica.
I want dataimports to go to the leader, since it doesn't make any sense to run delta-imports on the replica (the updates wouldn't be distributed to the leader).
From the documentation I gather that CloudSolrServer knows the cluster state (obtained from ZooKeeper) and by default sends all updates only to the leader.
What I want is to make CloudSolrServer send all dataimport commands to the leader. I have the following code:
SolrServer solrServer = new CloudSolrServer("localhost:2181");
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("qt", "/dataimport");
params.set("command", "delta-import");
QueryResponse response = solrServer.query(params);
But I see that the requests still go to both of my servers, localhost:8080 and localhost:8983. Is there any way to fix this?
Just replace your Solr server initialization with the following:
SolrServer solrServer = new CloudSolrServer("zkHost1:port,zkHost2:port");
This will cause the Solr client to consult ZooKeeper for the SolrCloud state.
For more details, read the CloudSolrServer documentation on initializing from a ZooKeeper ensemble.
try {
    CloudSolrServer css = new CloudSolrServer("host1:2181,host2:2181");
    css.connect();
    ZkStateReader zkSR2 = css.getZkStateReader();
    String leader = zkSR2.getLeaderUrl("collection_name", "shard1", 10);
} catch (KeeperException e) {
} catch (IOException e) {
} catch (InterruptedException e) {
}