MDB exception handling - prepared-statement

I have an MDB that calls some methods from its onMessage() method. Inside the catch block of onMessage() I run an update query using a PreparedStatement. I notice that if the flow reaches the catch block, the update statement is not committed. Is it not possible to do that inside the catch block of an MDB? My onMessage() method looks like this:
public void onMessage(Message message) {
    try {
        someMethod();
    } catch (Throwable o) {
        someUpdateMethod(); // update query runs here
    }
}

Whatever threw the exception put the transaction context into rollback-only mode.
When onMessage() returns, all transactional resources will be rolled back. This includes the datasource used for the prepared statement in someUpdateMethod().
To get the update committed it must be executed in a separate transaction. Do this by calling another stateless session bean whose method is annotated with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW):
@MessageDriven(...)
public class MyMdb implements MessageListener {

    @EJB
    Updater updater;

    @Override
    public void onMessage(Message message) {
        try {
            someMethod();
        } catch (Throwable o) {
            updater.someUpdateMethod();
        }
    }
}
The stateless session EJB that executes the update in a separate transaction:
@Stateless
public class Updater {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void someUpdateMethod() {
        // update query runs here
    }
}

Related

Use Cases of Flink CheckpointedFunction

While going through the Flink official documentation, I came across CheckpointedFunction.
I am wondering why and when you would use this function. I am currently working on a stateful Flink job that relies heavily on ProcessFunction to save state in RocksDB. Just wondering if CheckpointedFunction is better than ProcessFunction.
CheckpointedFunction is for cases where you need to work with state that should be managed by Flink and included in checkpoints, but where you aren't working with a KeyedStream and so you cannot use keyed state like you would in a KeyedProcessFunction.
The most common use cases of CheckpointedFunction are in sources and sinks.
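For illustration, here is a minimal sketch of that pattern, loosely based on the buffering-sink example in the Flink documentation (the element type, threshold, and flush target are illustrative):
// A sink that buffers elements and persists the buffer in operator
// (non-keyed) state via CheckpointedFunction, so buffered elements
// survive a failure between checkpoints.
public class BufferingSink implements SinkFunction<Tuple2<String, Integer>>, CheckpointedFunction {

    private final int threshold;
    private transient ListState<Tuple2<String, Integer>> checkpointedState;
    private List<Tuple2<String, Integer>> bufferedElements;

    public BufferingSink(int threshold) {
        this.threshold = threshold;
        this.bufferedElements = new ArrayList<>();
    }

    @Override
    public void invoke(Tuple2<String, Integer> value, Context context) throws Exception {
        bufferedElements.add(value);
        if (bufferedElements.size() >= threshold) {
            // flush the buffer to the external system here, then clear it
            bufferedElements.clear();
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        // replace the checkpointed copy with the current buffer contents
        checkpointedState.clear();
        checkpointedState.addAll(bufferedElements);
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        checkpointedState = context.getOperatorStateStore().getListState(
                new ListStateDescriptor<>("buffered-elements",
                        TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {})));
        if (context.isRestored()) {
            // after a restore, refill the in-memory buffer from state
            for (Tuple2<String, Integer> element : checkpointedState.get()) {
                bufferedElements.add(element);
            }
        }
    }
}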
In addition to @David's answer, I have another use case in which I don't use CheckpointedFunction with a source or sink. I use it in a ProcessFunction where I want to count (programmatically) how many times my job has restarted. I use MyProcessFunction together with CheckpointedFunction and update the ListState<Long> restarts when the job restarts. I use this state in integration tests to ensure that the job was restarted upon a failure. I based my example on the Flink checkpoint example for sinks.
public class MyProcessFunction<V> extends ProcessFunction<V, V> implements CheckpointedFunction {
    ...
    private transient ListState<Long> restarts;

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception { ... }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        restarts = context.getOperatorStateStore().getListState(
                new ListStateDescriptor<Long>("restarts", Long.class));
        if (context.isRestored()) {
            List<Long> restoreList = Lists.newArrayList(restarts.get());
            if (restoreList == null || restoreList.isEmpty()) {
                restarts.add(1L);
                System.out.println("restarts: 1");
            } else {
                Long max = Collections.max(restoreList);
                System.out.println("restarts: " + max);
                restarts.add(max + 1);
            }
        } else {
            System.out.println("restarts: never restored");
        }
    }

    @Override
    public void open(Configuration parameters) throws Exception { ... }

    @Override
    public void processElement(V value, Context ctx, Collector<V> out) throws Exception { ... }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<V> out) throws Exception { ... }
}

Spring @Transactional does not begin new transaction on MS SQL

I'm having trouble with transactions in Spring Boot using the @Transactional annotation. The latest Spring is connected to an MS SQL database.
I have the following service, which periodically executes a transactional method according to some criteria:
@Service
public class SomeService {

    SomeRepository repository;

    public SomeService(SomeRepository someRepository) {
        this.repository = someRepository;
    }

    @Scheduled(fixedDelayString = "${property}") // 10 seconds
    protected void scheduledIteration() {
        if (something) {
            insertDataInNewTransaction(getSomeData());
        }
    }

    @Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
    protected void insertDataInNewTransaction(List<Data> data) {
        // insert data to db
        repository.saveAll(data);
        // call verify proc
        repository.verifyData();
    }
}
The algorithm is supposed to process the data, insert it into a table and perform a check (a db procedure). If the procedure throws an exception, the transaction should be rolled back. I'm sure that the procedure does not commit the transaction itself.
The problem I'm facing is that calling the method does not begin a new transaction (or it does, but it's auto-committed), because I've tried the following:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
protected void insertDataInNewTransaction(List<Data> data) throws Exception {
    int counter = 0;
    for (Data d : data) {
        repository.save(d);
        counter++;
        // test
        if (counter == 10) {
            throw new Exception("test");
        }
    }
}
After the test method is executed, the first 10 rows remain in the table, where they were supposed to be rolled back. During debugging I noticed that calling repository.save() in the loop inserts into the table outside a transaction, because I can see the row from the DB IDE while the debugger is sitting on the next row. This gave me the idea that the problem is caused by auto-commit, as that is the MS SQL default. So I tried adding the following properties, but without any difference:
spring.datasource.hikari.auto-commit=false
spring.datasource.auto-commit=false
Is there anything I'm doing wrong?
If you use Spring proxy AOP, you need to make the method insertDataInNewTransaction public.
Remember that even if the method is public, invoking it from the same bean will not create a new transaction (because the call does not go through the Spring proxy).
Short answer:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public void insertDataInNewTransaction(List<Data> data) {
    // insert data to db
    repository.saveAll(data);
    // call verify proc
    repository.verifyData();
}
But if you really need a new, separate transaction, use Propagation.REQUIRES_NEW instead of Propagation.REQUIRED.
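One way to get the annotation honored regardless of where the call comes from is to move the transactional method into a separate bean, so the call from the scheduler always goes through the Spring proxy. A minimal sketch, reusing the names from the question (the DataInserter class name is invented):
@Service
public class DataInserter {

    private final SomeRepository repository;

    public DataInserter(SomeRepository repository) {
        this.repository = repository;
    }

    // Invoked through the Spring proxy, so a fresh transaction is started
    // and rolled back on any Exception.
    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void insertDataInNewTransaction(List<Data> data) {
        repository.saveAll(data);
        repository.verifyData();
    }
}

@Service
public class SomeService {

    private final DataInserter dataInserter;

    public SomeService(DataInserter dataInserter) {
        this.dataInserter = dataInserter;
    }

    @Scheduled(fixedDelayString = "${property}")
    public void scheduledIteration() {
        if (something) {
            // cross-bean call, so the proxy applies the transaction
            dataInserter.insertDataInNewTransaction(getSomeData());
        }
    }
}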

Flink Jdbc sink

I have created an application where I read data from Kinesis streams and sink the data into a MySQL table.
I tried to load test the app. For 100k entries it takes more than 3 hours. Any suggestion why it's so slow? One more thing: the primary key of my table consists of 7 columns.
I am using Hibernate to store POJOs directly into the database.
Code:
public class JDBCSink extends RichSinkFunction<CompetitorConfig> {

    private static SessionFactory sessionFactory;
    private static StandardServiceRegistryBuilder serviceRegistry;
    private Session session;
    private String username;
    private String password;
    private static final Logger LOG = LoggerFactory.getLogger(JDBCSink.class);

    public JDBCSink(String user, String pass) {
        username = user;
        password = pass;
    }

    public static void configureHibernateUtil(String username, String password) {
        try {
            Properties prop = new Properties();
            prop.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
            prop.setProperty("hibernate.connection.driver_class", "com.mysql.cj.jdbc.Driver");
            prop.setProperty("hibernate.connection.url", "url");
            prop.setProperty("hibernate.connection.username", username);
            prop.setProperty("hibernate.connection.password", password);

            org.hibernate.cfg.Configuration configuration = new org.hibernate.cfg.Configuration().addProperties(prop);
            configuration.addAnnotatedClass(CompetitorConfig.class);

            serviceRegistry = new StandardServiceRegistryBuilder().applySettings(configuration.getProperties());
            sessionFactory = configuration.buildSessionFactory(serviceRegistry.build());
        } catch (Throwable ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        configureHibernateUtil(username, password);
        this.session = sessionFactory.openSession();
    }

    @Override
    public void invoke(CompetitorConfig value) throws Exception {
        Transaction transaction = null;
        try {
            transaction = session.beginTransaction();
            session.merge(value);
            session.flush();
        } catch (Exception e) {
            throw e;
        } finally {
            transaction.commit();
        }
    }

    @Override
    public void close() throws Exception {
        this.session.close();
        sessionFactory.close();
    }
}
This is slow because you are writing each record individually, wrapped in its own transaction. A high-performance database sink will do buffered, bulk writes, and commit transactions as part of checkpointing.
If you need exactly-once guarantees and can be satisfied with upsert semantics, you can use Flink's existing JDBC sink. If you require two-phase commit, that has already been merged to master and will be included in Flink 1.13. See FLINK-15578.
Update:
There's no standard SQL syntax for an upsert; you'll need to figure out if and how your database supports this. For example:
MySQL:
INSERT ... ON DUPLICATE KEY UPDATE ...
PostgreSQL:
INSERT ... ON CONFLICT ... DO UPDATE SET ...
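Putting the two together, a rough sketch of a batched MySQL upsert with Flink's JdbcSink could look like this (assumes the flink-connector-jdbc dependency; the table name, columns, and getters on CompetitorConfig are illustrative):
// A buffered, batched upsert with Flink's JDBC connector. Instead of one
// Hibernate transaction per record, rows are accumulated and flushed in
// batches. "stream" is the DataStream<CompetitorConfig> read from Kinesis.
stream.addSink(JdbcSink.sink(
        "INSERT INTO competitor_config (id, name, config_value) VALUES (?, ?, ?) "
                + "ON DUPLICATE KEY UPDATE name = VALUES(name), config_value = VALUES(config_value)",
        (statement, config) -> {
            statement.setString(1, config.getId());
            statement.setString(2, config.getName());
            statement.setString(3, config.getValue());
        },
        JdbcExecutionOptions.builder()
                .withBatchSize(1000)       // flush after 1000 buffered rows...
                .withBatchIntervalMs(200)  // ...or after 200 ms, whichever comes first
                .withMaxRetries(3)
                .build(),
        new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                .withUrl("jdbc:mysql://localhost:3306/mydb")
                .withDriverName("com.mysql.cj.jdbc.Driver")
                .withUsername(username)
                .withPassword(password)
                .build()));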
For what it's worth, applications like this are generally easier to implement using Flink SQL. In that case you would use the Kinesis table connector together with the JDBC table connector.

Update external Database in RichCoFlatMapFunction

I have a RichCoFlatMapFunction
DataStream<Metadata> metadataKeyedStream =
        env.addSource(metadataStream)
           .keyBy(Metadata::getId);

SingleOutputStreamOperator<Output> outputStream =
        env.addSource(recordStream)
           .assignTimestampsAndWatermarks(new RecordTimeExtractor())
           .keyBy(Record::getId)
           .connect(metadataKeyedStream)
           .flatMap(new CustomCoFlatMap(metadataTable.listAllAsMap()));
public class CustomCoFlatMap extends RichCoFlatMapFunction<Record, Metadata, Output> {

    private transient Map<String, Metadata> datasource;
    private transient ValueState<Metadata> metadataState;

    @Inject
    public void setDataSource(Map<String, Metadata> datasource) {
        this.datasource = datasource;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // read ValueState
        metadataState = getRuntimeContext().getState(
                new ValueStateDescriptor<Metadata>("metadataState", Metadata.class));
    }

    @Override
    public void flatMap2(Metadata metadata, Collector<Output> collector) throws Exception {
        // if metadata record is removed from table, remove the same from local state
        if (metadata.getEventName().equals("REMOVE")) {
            metadataState.clear();
            return;
        }
        // update metadata in ValueState
        this.metadataState.update(metadata);
    }

    @Override
    public void flatMap1(Record record, Collector<Output> collector) throws Exception {
        Metadata metadata = this.metadataState.value();
        // if metadata is not present in ValueState
        if (metadata == null) {
            // get metadata from datasource
            metadata = datasource.get(record.getId());
            // if metadata found in datasource, add it to ValueState
            if (metadata != null) {
                metadataState.update(metadata);
                Output output = new Output(record.getId(), metadata.getName(),
                        metadata.getVersion(), metadata.getType());
                if (metadata.getId() == 123) {
                    // here I want to update metadata into another database
                    // can I do it here directly?
                }
                collector.collect(output);
            }
        }
    }
}
Here, in the flatMap1 method, I want to update a database. Can I do that operation in flatMap1? I am asking because it involves some wait time to query the DB and then update it.
While it is possible in principle to do this, it's not a good idea. Doing synchronous i/o in a Flink user function causes two problems:
You are tying up considerable resources that spend most of their time idle, waiting for a response.
While waiting, that operator is creating backpressure that prevents checkpoint barriers from making progress. This can easily cause occasional checkpoint timeouts and job failures.
It would be better to use a KeyedCoProcessFunction instead, and emit the intended database update as a side output. This can then be handled downstream either by a database sink or by using a RichAsyncFunction.
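A rough sketch of that approach, reusing the names from the question (MyDatabaseSink is a stand-in for whatever database sink you choose):
public class CustomCoProcess extends KeyedCoProcessFunction<String, Record, Metadata, Output> {

    // side output channel carrying the intended database updates
    public static final OutputTag<Metadata> DB_UPDATES = new OutputTag<Metadata>("db-updates") {};

    private transient ValueState<Metadata> metadataState;

    @Override
    public void open(Configuration parameters) {
        metadataState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("metadataState", Metadata.class));
    }

    @Override
    public void processElement1(Record record, Context ctx, Collector<Output> out) throws Exception {
        Metadata metadata = metadataState.value();
        if (metadata != null) {
            out.collect(new Output(record.getId(), metadata.getName(),
                    metadata.getVersion(), metadata.getType()));
            if (metadata.getId() == 123) {
                // emit the intended update as a side output instead of
                // doing blocking i/o here
                ctx.output(DB_UPDATES, metadata);
            }
        }
    }

    @Override
    public void processElement2(Metadata metadata, Context ctx, Collector<Output> out) throws Exception {
        metadataState.update(metadata);
    }
}

// Downstream wiring: route the side output to a database sink
// (or through a RichAsyncFunction):
SingleOutputStreamOperator<Output> outputStream = recordKeyedStream
        .connect(metadataKeyedStream)
        .process(new CustomCoProcess());
outputStream.getSideOutput(CustomCoProcess.DB_UPDATES).addSink(new MyDatabaseSink());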

How to cancel delete using AbstractMongoEventListener?

Can I cancel a delete event using the onBeforeDelete method of MongoGenreCancelDeleteEventsListener? If yes, then how?
@Component
public class MongoGenreCancelDeleteEventsListener extends AbstractMongoEventListener<Genre> {

    private final BookRepository bookRepository;

    public MongoGenreCancelDeleteEventsListener(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @Override
    public void onBeforeDelete(BeforeDeleteEvent<Genre> event) {
        super.onBeforeDelete(event);
        // check conditions and cancel delete
    }
}
I know this is an old question, but I had the same issue and I found a solution.
Basically, your code should become like this:
@Component
public class MongoGenreCancelDeleteEventsListener extends AbstractMongoEventListener<Genre> {

    private BookRepository bookRepository;

    @Override
    public void onBeforeDelete(BeforeDeleteEvent<Genre> event) {
        super.onBeforeDelete(event);
        boolean abort = false;
        // do your check and set abort to true if necessary
        if (abort) {
            throw new IllegalStateException(); // or any exception you like
        }
    }
}
The thrown exception prevents the operation from going further, and it stops there. The exception also gets propagated to the caller (it is wrapped inside an UndeclaredThrowableException, so that is what you should catch; make sure the wrapped exception is indeed the one you threw by calling getCause()).
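On the caller's side that could look like this (a sketch; genreRepository and genreId stand in for your own code):
try {
    genreRepository.deleteById(genreId);
} catch (UndeclaredThrowableException e) {
    // unwrap and verify it is the exception thrown by the listener
    if (e.getCause() instanceof IllegalStateException) {
        // the delete was cancelled by the listener
    }
}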
Hope it helps.