How to pause a Camel Quartz2 timer in a suspended route? - apache-camel

The following unit test exercises a quartz2 route that triggers every second:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class CamelQuartzTest extends CamelTestSupport {

    private static String routeId = "test-route";

    @Test
    public void testSuspendRoute() throws Exception {
        // arrange
        MockEndpoint mock = getMockEndpoint("mock:result");

        // act
        System.out.println("context.start()");
        context.start();
        Thread.sleep(2000);
        System.out.println(String.format("receivedCounter = %d", mock.getReceivedCounter()));

        System.out.println("context.startRoute()");
        context.startRoute(routeId);
        Thread.sleep(2000);
        System.out.println(String.format("receivedCounter = %d", mock.getReceivedCounter()));

        System.out.println("context.suspendRoute()");
        context.suspendRoute(routeId);
        Thread.sleep(2000);
        System.out.println(String.format("receivedCounter = %d", mock.getReceivedCounter()));

        System.out.println("context.resumeRoute()");
        context.resumeRoute(routeId);
        Thread.sleep(2000);
        System.out.println(String.format("receivedCounter = %d", mock.getReceivedCounter()));

        System.out.println("context.stop()");
        context.stop();
        System.out.println(String.format("receivedCounter = %d", mock.getReceivedCounter()));

        // assert
        assertEquals(4, mock.getReceivedCounter());
    }

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                from("quartz2://testtimer?cron=0/1+*+*+?+*+*")
                    .autoStartup(false)
                    .routeId(routeId)
                    .setBody()
                    .simple("${header.triggerName}: ${header.fireTime}")
                    .to("mock:result", "stream:out");
            }
        };
    }
}
Result output:
context.start()
receivedCounter = 0
context.startRoute()
testtimer: Tue Oct 21 10:06:38 CEST 2014
testtimer: Tue Oct 21 10:06:39 CEST 2014
receivedCounter = 2
context.suspendRoute()
receivedCounter = 2
context.resumeRoute()
testtimer: Tue Oct 21 10:06:41 CEST 2014
testtimer: Tue Oct 21 10:06:41 CEST 2014
testtimer: Tue Oct 21 10:06:42 CEST 2014
testtimer: Tue Oct 21 10:06:43 CEST 2014
receivedCounter = 6
context.stop()
receivedCounter = 6
After resuming the route, the output shows 4 incoming triggers where 2 were expected. Apparently, the quartz2 timer keeps firing while the route is suspended. How can I make quartz2 pause while the route is suspended?

Found the root cause: if a quartz job is suspended for a while and then resumed, the default behavior of quartz is to catch up on the triggers, a.k.a. "misfires", that were missed during the suspension. I did not find a way to switch off this misfire behavior entirely. However, decreasing the misfire threshold from 60 seconds to 500 ms helped in my case. This can be done by copying the default quartz.properties from quartz-<version>.jar to org/quartz/quartz.properties on the default classpath, and overruling the misfire threshold:
# Properties file for use by StdSchedulerFactory
# to create a Quartz Scheduler Instance.
# This file overrules the default quartz.properties file in the
# quartz-<version>.jar
#
org.quartz.scheduler.instanceName: DefaultQuartzScheduler
org.quartz.scheduler.rmi.export: false
org.quartz.scheduler.rmi.proxy: false
org.quartz.scheduler.wrapJobExecutionInUserTransaction: false
org.quartz.threadPool.class: org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount: 10
org.quartz.threadPool.threadPriority: 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread: true
# default threshold: 60 seconds
#org.quartz.jobStore.misfireThreshold: 60000
# overruled threshold: 500 ms, to prevent superfluous triggers after resuming
# a quartz job
org.quartz.jobStore.misfireThreshold: 500
org.quartz.jobStore.class: org.quartz.simpl.RAMJobStore
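
Alternatively, if you would rather not ship a classpath-wide quartz.properties, the same scheduler properties can be handed to the Camel quartz2 component programmatically. This is a minimal, untested sketch; it assumes the setProperties(Properties) setter on org.apache.camel.component.quartz2.QuartzComponent and must run before the CamelContext (and thus the scheduler) is started:

import java.util.Properties;
import org.apache.camel.component.quartz2.QuartzComponent;

// Sketch: configure the misfire threshold on the quartz2 component directly.
// Note that these properties replace the defaults, so the scheduler, thread
// pool, and job store settings have to be supplied as well (mirroring the
// properties file above).
QuartzComponent quartz = context.getComponent("quartz2", QuartzComponent.class);
Properties props = new Properties();
props.setProperty("org.quartz.scheduler.instanceName", "DefaultQuartzScheduler");
props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
props.setProperty("org.quartz.threadPool.threadCount", "10");
props.setProperty("org.quartz.jobStore.class", "org.quartz.simpl.RAMJobStore");
// 500 ms instead of the default 60000 ms, as above
props.setProperty("org.quartz.jobStore.misfireThreshold", "500");
quartz.setProperties(props);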

Related

How to let Flink flush last line to sink when producer(kafka) does not produce new line

When my Flink program is in event time mode, the sink does not get the last line (say line A). If I feed a new line (line B) to Flink, I get line A, but I still can't get line B.
import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.{CheckpointingMode, TimeCharacteristic}
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
import scala.util.parsing.json.JSON

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)
env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE)
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

val properties = new Properties()
properties.setProperty("bootstrap.servers", "localhost:9092")
properties.setProperty("group.id", "test")

val consumer = new FlinkKafkaConsumer[String]("topic", new SimpleStringSchema(), properties)
val stream: DataStream[String] = env.addSource(consumer).setParallelism(1)

// parseMessage and OnlineNum are defined elsewhere in the application (not shown)
stream.map { m =>
  val result = JSON.parseFull(m).asInstanceOf[Some[Map[String, Any]]].get
  val msg = result("message").asInstanceOf[String]
  val num = parseMessage(msg)
  val key = s"${num.zoneId} ${num.subZoneId}"
  (key, num, num.onlineNum)
}.filter { data =>
  data._2.subZoneId == 301 && data._2.zoneId == 5002
}.assignTimestampsAndWatermarks(new MyTimestampExtractor()).keyBy(0)
  .window(TumblingEventTimeWindows.of(Time.seconds(1)))
  .allowedLateness(Time.minutes(1))
  .maxBy(2).addSink { v =>
    System.out.println(s"${v._2.time} ${v._1}: ${v._2.onlineNum} ")
  }

class MyTimestampExtractor() extends AscendingTimestampExtractor[(String, OnlineNum, Int)]() {
  // note: the pattern originally read "HH:mm:SS"; "SS" means milliseconds in
  // SimpleDateFormat, so "ss" (seconds) is what was intended here
  val byMinute = new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss")

  override def extractAscendingTimestamp(element: (String, OnlineNum, Int)): Long = {
    val dateTimeString = element._2.date + " " + element._2.time
    val c1 = byMinute.parse(dateTimeString).getTime
    // debug output, disabled:
    // if (element._2.time.contains("22:59") && element._2.subZoneId == 301) {
    //   System.out.println(s"${element._2.time} ${element._1}: ${element._2.onlineNum} ")
    //   System.out.println(s"${element._2.time} ${c1 - getCurrentWatermark.getTimestamp}")
    // }
    c1
  }
}
data sample:
01:01:14 5002 301: 29
01:01:36 5002 301: 27
01:02:05 5002 301: 27
01:02:31 5002 301: 29
01:03:02 5002 301: 29
01:03:50 5002 301: 29
01:04:52 5002 301: 29
01:07:24 5002 301: 26
01:09:28 5002 301: 21
01:11:04 5002 301: 22
01:12:11 5002 301: 24
01:13:54 5002 301: 23
01:15:13 5002 301: 22
01:16:04 5002 301: 19 (I cannot get this line)
Then I push a new line to Flink (via Kafka):
01:17:28 5002 301: 15
I then get 01:16:04 5002 301: 19, but 01:17:28 5002 301: 15 may still be held back in Flink.
This happens because it's event time, and the events' timestamps are used to measure the flow of time for windows.
In such a case, when only one event is in the window, Flink does not know that the window should be emitted. For this reason, when you add the next event, the previous window is closed and its elements are emitted (in your case 19), but then a next window is created (in your case 15).
Probably the best idea in such a case is to add a custom ProcessingTimeTrigger, which will basically allow you to emit the window after some time has passed, no matter whether events are flowing or not. You can find info about Trigger in the documentation; a minimal sketch follows.
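For illustration, here is a minimal, untested Java sketch of such a trigger: it keeps the usual event-time behavior but also arms a processing-time timeout (the 60-second value in the usage comment is an arbitrary assumption) so a stalled source still flushes the last window.

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Fires on the event-time watermark as usual, but additionally registers a
// processing-time timer so the window is emitted after timeoutMs even when
// no further events arrive to advance the watermark.
public class EventTimeTriggerWithTimeout extends Trigger<Object, TimeWindow> {

    private final long timeoutMs;

    public EventTimeTriggerWithTimeout(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window,
                                   TriggerContext ctx) throws Exception {
        ctx.registerEventTimeTimer(window.maxTimestamp());
        ctx.registerProcessingTimeTimer(ctx.getCurrentProcessingTime() + timeoutMs);
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        // Timeout reached: emit whatever the window currently holds.
        return TriggerResult.FIRE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        // For brevity this sketch only cleans up the event-time timer; a production
        // version should also track and delete its processing-time timer.
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}

// usage (plugged into the windowed stream from the question):
// .window(TumblingEventTimeWindows.of(Time.seconds(1)))
// .trigger(new EventTimeTriggerWithTimeout(60000L))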
What is the final solution, please? I also encountered a similar situation, which can be worked around by emitting new Watermark(System.currentTimeMillis()), but that does not seem to fit the purpose of watermarks. Isn't this a common problem, or are application developers and the community deliberately ignoring it?
Why are results not emitted on time when I consume Kafka messages using Flink streaming SQL with GROUP BY TUMBLE(rowtime)?
Configure the tableEnv so it emits early:
TableConfig config = bbTableEnv.getConfig();
config.getConfiguration().setBoolean("table.exec.emit.early-fire.enabled", true);
config.getConfiguration().setString("table.exec.emit.early-fire.delay", "1s");

transaction isolation level (READ UNCOMMITTED) of Spring Batch

I want to set the transaction isolation level to READ UNCOMMITTED, but it doesn't work.
Here is my job source.
TestJobConfiguration.java
@Slf4j
@Configuration
public class TestJobConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilders;

    @Autowired
    private CustomJobExecutionListener customJobExecutionListener;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Autowired
    private StepBuilderFactory stepBuilders;

    @Bean(name = "testJob")
    public Job job() {
        JobBuilder jobBuilder = jobBuilders.get("testJob").listener(customJobExecutionListener);
        Step step = stepBuilders.get("testStep").tasklet(count()).transactionAttribute(transactionAttr())
                .build();
        return jobBuilder.start(step).build();
    }

    public TransactionAttribute transactionAttr() {
        RuleBasedTransactionAttribute tr = new RuleBasedTransactionAttribute();
        tr.setIsolationLevel(TransactionDefinition.ISOLATION_READ_UNCOMMITTED);
        return tr;
    }

    public Tasklet count() {
        return new Tasklet() {
            @Override
            public RepeatStatus execute(StepContribution contribution, ChunkContext context) {
                StringBuilder countSql = new StringBuilder();
                countSql.append(" SELECT COUNT(id)");
                countSql.append(" FROM user");
                log.debug("test start");
                int count = jdbcTemplate.queryForObject(countSql.toString(), Integer.class);
                contribution.incrementWriteCount(count);
                log.debug("test end");
                log.debug("count : {}", count);
                return RepeatStatus.FINISHED;
            }
        };
    }
}
I executed the SQL statement below in Microsoft SQL Server Management Studio, and then executed TestJob.
begin tran
delete from user
I expected the job to complete, but it stopped at the SQL execution point.
My log is below.
...
2017-08-29T12:21:23.555+09:00 DEBUG --- [ main] c.p.l.b.rank.job.TestJobConfiguration : test start
When I change my SQL statement so that the table hint follows the table name, i.e. countSql.append(" FROM user"); becomes countSql.append(" FROM user WITH (READUNCOMMITTED)");, it works.
...
2017-08-29T13:44:43.692+09:00 DEBUG --- [ main] c.p.l.b.rank.job.TestJobConfiguration : test start
2017-08-29T13:44:43.726+09:00 DEBUG --- [ main] c.p.l.b.rank.job.TestJobConfiguration : test end
2017-08-29T13:44:43.726+09:00 DEBUG --- [ main] c.p.l.b.rank.job.TestJobConfiguration : count : 15178
2017-08-29T13:44:43.747+09:00 INFO --- [ main] c.p.l.b.l.CustomJobExecutionListener :
<!-----------------------------------------------------------------
Protocol for testJob
Started : Tue Aug 29 13:44:43 KST 2017
Finished : Tue Aug 29 13:44:43 KST 2017
Exit-Code : COMPLETED
Exit-Descr. :
Status : COMPLETED
Job-Parameter:
date=2017-08-29 13:44:43 +0900
JOB process time : 0sec
----------------------------------------------------------------->
Why doesn't the isolation level of the transaction attribute work?
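
As a side check, here is a minimal sketch (my own addition, not the fix for the Batch job above; it assumes a DataSourceTransactionManager named transactionManager wired for the same DataSource) that applies READ UNCOMMITTED through a plain Spring TransactionTemplate. If this also blocks, the problem is not specific to Spring Batch:

import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

// Run the same count in an explicit transaction with READ UNCOMMITTED.
TransactionTemplate txTemplate = new TransactionTemplate(transactionManager);
txTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_READ_UNCOMMITTED);
Integer count = txTemplate.execute(status ->
        jdbcTemplate.queryForObject("SELECT COUNT(id) FROM user", Integer.class));
System.out.println("count = " + count);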

apache-flink: sliding window in output

I'm currently coding a small application to understand sliding windows in Flink (with data input from an Apache Kafka topic):
// Split each Kafka record on commas and create a tuple
// ("f" is a date format defined elsewhere in the application, not shown)
DataStream<Tuple3<String, Integer, Date>> parsedStream = stream
        .map((line) -> {
            String[] cells = line.split(",");
            return new Tuple3<>(cells[1], Integer.parseInt(cells[4]), f.parse(cells[2]));
        });

DataStream<Tuple3<String, Integer, Date>> parsedStreamWithTSWM = parsedStream
        .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Integer, Date>>(Time.minutes(1)) {
            @Override
            public long extractTimestamp(Tuple3<String, Integer, Date> element) {
                return element.f2.getTime();
            }
        });

// Sum values per window and per id
DataStream<Tuple3<String, Integer, Date>> AggStream = parsedStreamWithTSWM
        .keyBy(0)
        .window(SlidingEventTimeWindows.of(Time.minutes(30), Time.minutes(1)))
        .sum(1);

AggStream.print();
Is it possible to improve my output (AggStream.print();) by adding the details of the window that produced each aggregated value?
$ tail -f flink-chapichapo-jobmanager-0.out
(228035740000002,300,Fri Apr 07 14:42:00 CEST 2017)
(228035740000000,28,Fri Apr 07 14:42:00 CEST 2017)
(228035740000002,300,Fri Apr 07 14:43:00 CEST 2017)
(228035740000000,27,Fri Apr 07 14:43:00 CEST 2017)
(228035740000002,300,Fri Apr 07 14:44:00 CEST 2017)
(228035740000000,26,Fri Apr 07 14:44:00 CEST 2017)
(228035740000001,27,Fri Apr 07 14:44:00 CEST 2017)
(228035740000002,300,Fri Apr 07 14:45:00 CEST 2017)
(228035740000000,25,Fri Apr 07 14:45:00 CEST 2017)
Thank you in advance
You can use the generic apply function, where you have access to the window information.
public interface WindowFunction<IN, OUT, KEY, W extends Window> extends Function, Serializable {

    /**
     * Evaluates the window and outputs none or several elements.
     *
     * @param key    The key for which this window is evaluated.
     * @param window The window that is being evaluated.
     * @param input  The elements in the window being evaluated.
     * @param out    A collector for emitting elements.
     *
     * @throws Exception The function may throw exceptions to fail the program and trigger recovery.
     */
    void apply(KEY key, W window, Iterable<IN> input, Collector<OUT> out) throws Exception;
}
See the docs. A sketch against the stream from the question follows.
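
For illustration, here is a minimal, untested sketch of that approach: it replaces .sum(1) with an apply() that sums manually and emits the window's start and end timestamps alongside the key and the sum (the Tuple4 output shape is my assumption; the variable names are reused from the question).

import java.util.Date;

import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.api.java.tuple.Tuple4;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

DataStream<Tuple4<String, Integer, Long, Long>> aggWithWindow = parsedStreamWithTSWM
        .keyBy(0)
        .window(SlidingEventTimeWindows.of(Time.minutes(30), Time.minutes(1)))
        .apply(new WindowFunction<Tuple3<String, Integer, Date>, Tuple4<String, Integer, Long, Long>, Tuple, TimeWindow>() {
            @Override
            public void apply(Tuple key, TimeWindow window,
                              Iterable<Tuple3<String, Integer, Date>> input,
                              Collector<Tuple4<String, Integer, Long, Long>> out) {
                int sum = 0;
                for (Tuple3<String, Integer, Date> element : input) {
                    sum += element.f1;
                }
                // emit key, aggregated sum, and the window's start/end (epoch millis)
                out.collect(new Tuple4<>(key.getField(0), sum, window.getStart(), window.getEnd()));
            }
        });

aggWithWindow.print();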

Eager loading in Hibernate doesn't work, referenced attribute has null values

I have a Hibernate problem, probably due to lazy/eager loading or something similar. I have a model class Student and a model class Century.
The Student has an attribute century. Before I save the student in the DAO, I save the referenced Century. But when I save the student, the century is null everywhere, except for its name.
I tried
@Fetch(FetchMode.JOIN),
@ManyToOne(fetch = FetchType.EAGER) and getHibernateTemplate().initialize(student.getCentury());
so far, but nothing works.
Some ideas would be great, thanks in advance!
Student class:
// @Fetch(FetchMode.JOIN)
@ManyToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "CENTURY_ID")
public Century getCentury() {
    return century;
}
StudentDao:
load-method inherited from SuperDao:
@SuppressWarnings("unchecked")
public T load(Long id) {
    T object = (T) getSession().get(classOfData, id);
    log.error(String.format("%s with id %d loaded.", CLASS_NAME_OF_DATA, id));
    if (object == null) {
        log.error("Object not found");
    }
    return object;
}
save method in StudentDao:
@Override
public Student save(Student student) throws HibernateException {
    // getHibernateTemplate().initialize(student.getCentury());
    log.error("Saving student:" + student.toString());
    Century century = centuryDao.load(student.getCentury().getId());
    System.out.println("instudentdao:" + century.getManiple());
    centuryDao.save(student.getCentury());
    getSession().saveOrUpdate(student);
    log.error(String.format("Student with id %d saved.", student.getId()));
    return student;
}
StudentAction load method is passed down to Dao:
private void loadStudent(String id) {
    create = request.getParameter("mode").equals("create");
    edit = request.getParameter("mode").equals("edit");
    view = request.getParameter("mode").equals("view");
    if (request.getParameter("mode").equals("create")) {
        student = new Student();
    } else {
        student = studentService.load(Long.valueOf(id));
    }
}
log for @ManyToOne(fetch = FetchType.EAGER)
2013-11-17 13:15:59,819 ERROR http-bio-8080-exec-9 - <Saving student:Student [id=1, firstName=hans, lastName=wurst, gender=null, city=HAMBURG, postalCode=22123, street=SHEMALESTREET, streetNumber=23, company=Company [id=null, name1=OLE, name2=null, nameShort=null, city=null, postalCode=null, street=null, streetNumber=null, email=null, phone=null, fax=null, status=null, supervisors=null], phone=1234123, email=olko@olko.olko, century=Century [id=null, name=Zenturie, maniple=null, students=null], birthday=Sun Jan 01 00:00:00 CET 2012, birthPlace=MUELLTONNE, userId=123, addressAddition=, status=ACTIVE, supervisor=Supervisor [id=null, firstName=null, lastName=, company=null, email=null, phone=null]]>
log for @JoinColumn(name = "CENTURY_ID")
2013-11-17 13:22:25,266 ERROR http-bio-8080-exec-10 - <Saving student:Student [id=1, firstName=hans, lastName=wurst, gender=null, city=HAMBURG, postalCode=22123, street=SHEMALESTREET, streetNumber=23, company=Company [id=null, name1=OLE, name2=null, nameShort=null, city=null, postalCode=null, street=null, streetNumber=null, email=null, phone=null, fax=null, status=null, supervisors=null], phone=1234123, email=olko@olko.olko, century=Century [id=null, name=Zenturie, maniple=null, students=null], birthday=Sun Jan 01 00:00:00 CET 2012, birthPlace=MUELLTONNE, userId=123, addressAddition=, status=ACTIVE, supervisor=Supervisor [id=null, firstName=null, lastName=, company=null, email=null, phone=null]]>
log for initialize()
2013-11-17 13:25:01,381 ERROR http-bio-8080-exec-10 - <Saving student:Student [id=1, firstName=hans, lastName=wurst, gender=null, city=HAMBURG, postalCode=22123, street=SHEMALESTREET, streetNumber=23, company=Company [id=null, name1=OLE, name2=null, nameShort=null, city=null, postalCode=null, street=null, streetNumber=null, email=null, phone=null, fax=null, status=null, supervisors=null], phone=1234123, email=olko@olko.olko, century=Century [id=null, name=Zenturie, maniple=null, students=null], birthday=Sun Jan 01 00:00:00 CET 2012, birthPlace=MUELLTONNE, userId=123, addressAddition=, status=ACTIVE, supervisor=Supervisor [id=null, firstName=null, lastName=, company=null, email=null, phone=null]]>

Change Date value from one TimeZone to another TimeZone

My case: I have a Date object whose value is in UTC time; however, I want it shown in Japan time.
Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("Japan"));
calendar.setTime(someExistingDateObj);
System.out.println(String.valueOf(calendar.get(Calendar.HOUR_OF_DAY)) + ":" + calendar.get(Calendar.MINUTE));
someExistingDateObj is mapped from the DB, and the DB value is 2013-02-14 03:37:00.733. The output is:
04:37
It seems the timezone is not working?
Thanks for your time.
Your problem may be that you're looking at things wrong. A Date doesn't have a time zone. It represents a discrete moment in time and is "intended to reflect coordinated universal time". Calendars and date formatters are what get time zone information. Your second example with the Calendar and TimeZone instances appears to work fine. Right now, this code:
public static void main(String[] args) {
    Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("Japan"));
    System.out.println(String.valueOf(calendar.get(Calendar.HOUR)) + ":" + calendar.get(Calendar.MINUTE));
}
Reports:
0:32
That appears correct to me. What do you find wrong with it?
Update: Oh, perhaps you're expecting 12:32 from the above code? You'd want to use Calendar.HOUR_OF_DAY instead of Calendar.HOUR for that, or else do some hour math. Calendar.HOUR uses 0 to represent both noon and midnight.
Update 2: Here's my final attempt to try to get this across. Try this code:
public static void main(String[] args) {
    Calendar calendar = Calendar.getInstance();
    SimpleDateFormat format = new SimpleDateFormat("H:mm a Z");
    List<TimeZone> zones = Arrays.asList(
            TimeZone.getTimeZone("CST"),
            TimeZone.getTimeZone("UTC"),
            TimeZone.getTimeZone("Asia/Shanghai"),
            TimeZone.getTimeZone("Japan"));
    for (TimeZone zone : zones) {
        calendar.setTimeZone(zone);
        format.setTimeZone(zone);
        System.out.println(
                calendar.get(Calendar.HOUR_OF_DAY) + ":"
                + calendar.get(Calendar.MINUTE) + " "
                + (calendar.get(Calendar.AM_PM) == 0 ? "AM " : "PM ")
                + (calendar.get(Calendar.ZONE_OFFSET) / 1000 / 60 / 60));
        System.out.println(format.format(calendar.getTime()));
    }
}
Note that it creates a single Calendar object, representing "right now". Then it prints out the time represented by that calendar in four different time zones, using both the Calendar.get() method and a SimpleDateFormat to show that you get the same result both ways. The output of that right now is:
22:59 PM -6
22:59 PM -0600
4:59 AM 0
4:59 AM +0000
12:59 PM 8
12:59 PM +0800
13:59 PM 9
13:59 PM +0900
If you used Calendar.HOUR instead of Calendar.HOUR_OF_DAY, then you'd see this instead:
10:59 PM -6
22:59 PM -0600
4:59 AM 0
4:59 AM +0000
0:59 PM 8
12:59 PM +0800
1:59 PM 9
13:59 PM +0900
It correctly shows the current times in Central Standard Time (my time zone), UTC, Shanghai time, and Japan time, respectively, along with their time zone offsets. You can see that they all line up and have the correct offsets.
sdf2 and sdf3 are equally initialized, so there is no need for two of them.
