I'm using Flink CEP and I need to handle even the events that do not generate alerts. How can I do that?
I'm consuming events from RabbitMQ and have defined some patterns. What I need to do now is forward all of the received events to another queue, from which they go to a remote API. I'm new to Flink, so I followed the example in the documentation. When I try to send the events after matching them against the defined patterns, I only get the ones that match. What I want instead is, for example, to set an attribute to true on my events and send them all to the output queue.
public static void cep() throws Exception {
/**
* RabbitMQ connection
*/
final RMQConnectionConfig connectionConfig = new RMQConnectionConfig.Builder()
.setHost(HOST)
.setPort(PORTS[RD.getValue()])
.setUserName("guest")
.setPassword("guest")
.setVirtualHost("/")
.build();
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
/**
* Retrieve the input event stream from RabbitMQ
*/
final DataStream<String> inputEventstream = env
.addSource(new RMQSource<>(
connectionConfig, // config for the RabbitMQ connection
"input", // name of the RabbitMQ queue to consume
true, // use correlation ids; can be false if only at-least-once is required
new SimpleStringSchema())) // deserialization schema to turn messages into Java objects
.setParallelism(1);
/**
* Change DataStream<String> to DataStream<MonitoringEvent> where
* MonitoringEvent refers to a class which models our event.
*/
DataStream<MonitoringEvent> inputEventStreamClean = inputEventstream.flatMap(new Tokenizer());
Pattern<MonitoringEvent, ?> warningPattern = Pattern.<MonitoringEvent>begin("start")
.subtype(MonitoringEvent.class)
.where(new SimpleCondition<MonitoringEvent>() {
@Override
public boolean filter(MonitoringEvent value) {
return Integer.parseInt(value.getAncienneChute())>=CHUTE_GRAVE;
}
}).or(new SimpleCondition<MonitoringEvent>() {
@Override
public boolean filter(MonitoringEvent value) {
return value.isChaiseRoulante();
}
}).or(new SimpleCondition<MonitoringEvent>() {
@Override
public boolean filter(MonitoringEvent value) {
return value.isDeambulateur();
}
}).or(new SimpleCondition<MonitoringEvent>() {
@Override
public boolean filter(MonitoringEvent value) {
return value.isDeambulateur();
}
})
.or(new SimpleCondition<MonitoringEvent>() {
@Override
public boolean filter(MonitoringEvent value) {
return EntityManager.getInstance().hasCurrentYearFallTwice(value.getIdClient());
}
});
//PatternStream<MonitoringEvent> fallPatternStream = CEP.pattern(inputEventStreamClean.keyBy("idClient"), warningPattern);
inputEventStreamClean.print();
// Create a pattern stream from our warning pattern
PatternStream<MonitoringEvent> tempPatternStream = CEP.pattern(
inputEventStreamClean.keyBy("idClient"),
warningPattern);
DataStream<FallWarning> warnings = tempPatternStream.select(
(Map<String, List<MonitoringEvent>> pattern) -> {
MonitoringEvent first = (MonitoringEvent) pattern.get("start").get(0);
return new FallWarning(first.getIdClient(), Integer.valueOf(first.getAncienneChute()));
}
);
// Alert pattern: Two consecutive temperature warnings appearing within a time interval of 20 seconds
Pattern<FallWarning, ?> alertPattern = Pattern.<FallWarning>begin("start");
// Create a pattern stream from our alert pattern
PatternStream<FallWarning> alertPatternStream = CEP.pattern(
//warnings.keyBy("idClient"),
warnings,
alertPattern);
// Generate alert
DataStream<Alert> alerts = alertPatternStream.flatSelect(
(Map<String, List<FallWarning>> pattern, Collector<Alert> out) -> {
FallWarning first = pattern.get("start").get(0);
if (first.idNiveauUrgence>=CHUTE_GRAVE && (first.isChaiseRoulante() || first.isDeambulateur() || first.isFracture())) {
out.collect(new Alert(first.idClient));
}
});
// Print the warning and alert events to stdout
warnings.print();
alerts.print(); // here I can send them to RabbitMq
env.execute();
}
You just need to add a sink to your alerts DataStream, for example (mapping each Alert to a String so it fits the RMQSink<String>):
alerts.map(Alert::toString)
.addSink(new RMQSink<String>(
connectionConfig, // config for the RabbitMQ connection
"queueName", // name of the RabbitMQ queue to send messages to
new SimpleStringSchema())); // serialization schema to turn Java objects into messages
See the example at
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/connectors/rabbitmq.html
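To the original question of forwarding every event (not only the ones that match a pattern), the same kind of sink can be attached to the cleaned input stream, setting the flag on each event on the way out. Below is a minimal sketch meant to sit inside the existing cep() method; setForwarded() and the "output" queue name are illustrative assumptions, not part of the original code (MapFunction comes from org.apache.flink.api.common.functions):

// Flag every incoming event and forward it to the output queue,
// regardless of whether it also matches a CEP pattern later on.
DataStream<String> allEvents = inputEventStreamClean
    .map(new MapFunction<MonitoringEvent, String>() {
        @Override
        public String map(MonitoringEvent event) throws Exception {
            event.setForwarded(true); // hypothetical flag on MonitoringEvent
            return event.toString();  // or a proper JSON serialization
        }
    });
allEvents.addSink(new RMQSink<String>(
    connectionConfig,         // reuse the RabbitMQ connection config from above
    "output",                 // illustrative name of the outgoing queue
    new SimpleStringSchema()));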
I have a worker class (DataValidateWorker) derived from AsyncPeriodicBackgroundWorkerBase that runs at a 1-minute interval.
I need to send the data I get from the DB to a third-party web service and update the results in the DB. A web service response takes about 30-40 seconds to arrive, so I need to send the web service queries simultaneously, not sequentially.
To do that, I wrote the parallel code below, but I cannot get a working database connection inside the Task I start; I get many errors such as the DB connection being closed or already executing.
How can I create the db connection for my Task?
Would it be better to write this job in an external application (exe or service) instead of ABP?
public class DataValidateWorker : AsyncPeriodicBackgroundWorkerBase
{
private readonly IUnitOfWorkManager _unitOfWorkManager;
private readonly IDataFilter _dataFilter;
public DataValidateWorker(AbpAsyncTimer timer, IServiceScopeFactory serviceScopeFactory, IDataFilter dataFilter, IUnitOfWorkManager unitOfWorkManager) : base(timer, serviceScopeFactory)
{
_dataFilter = dataFilter;
_unitOfWorkManager = unitOfWorkManager;
Timer.Period = 60 * 1000; // 60 seconds
}
[UnitOfWork]
protected async override Task DoWorkAsync(PeriodicBackgroundWorkerContext workerContext)
{
try
{
var notificationValidationRepository = workerContext.ServiceProvider.GetRequiredService<IRepository<NotificationValidation, int>>();
var notificationValidationItems = await notificationValidationRepository.GetQueryableAsync();
List<NotificationValidation> list = new List<NotificationValidation>();
using (var uow = _unitOfWorkManager.Begin())
{
using (_dataFilter.Disable<IMultiTenant>())
{
list = notificationValidationItems.Where(x => x.RecordDateTime <= DateTime.Now && x.ValidationResult == (int)ValidationResult.NotStarted).ToList();
}
}
NotificationValidationArgs jobArgs = new NotificationValidationArgs();
foreach (var item in list)
{
jobArgs.notificationValidationId = item.Id;
Task taskA = Task.Factory.StartNew(async (Object obj) =>
{
// doing some third party web service operations and db operations
}, jobArgs);
}
}
catch (Exception ex)
{
Logger.LogCritical(2001, ex, DateTime.Now.ToString() + " -> DataValidateWorker -> try 1 -> RDMS uow");
}
}
}
You don't await any of the tasks, so the lifetime of the scoped objects (including the unit of work and its DB connection) ends while your tasks are still running.
Store all of the tasks in a collection and await them before the method finishes.
Something like this:
public class DataValidateWorker : AsyncPeriodicBackgroundWorkerBase
{
public DataValidateWorker(AbpAsyncTimer timer, IServiceScopeFactory serviceScopeFactory) : base(timer, serviceScopeFactory)
{
}
protected override async Task DoWorkAsync(PeriodicBackgroundWorkerContext workerContext)
{
var tasks = new List<Task>();
foreach (var item in list) // the items you loaded from the repository
{
tasks.Add(YourLongJob(item)); // don't await here; just collect the tasks
}
await Task.WhenAll(tasks); // wait until all of them is completed.
}
private async Task YourLongJob(object arg)
{
await Task.Delay(30 * 1000); // a long job
}
}
I'm learning Activiti 7, I drew a BPMN diagram as below:
When the highlight1 UserTask has been completed but the highlight2 UserTask is still pending, I ran the following code to highlight the completed flow element.
private AjaxResponse highlightHistoricProcess(@RequestParam("instanceId") String instanceId,
@AuthenticationPrincipal UserInfo userInfo) {
try {
// Get the instance from the history table
HistoricProcessInstance instance = historyService
.createHistoricProcessInstanceQuery().processInstanceId(instanceId).singleResult();
BpmnModel bpmnModel = repositoryService.getBpmnModel(instance.getProcessDefinitionId());
Process process = bpmnModel.getProcesses().get(0);
// Get all process elements, including sequences, events, activities, etc.
Collection<FlowElement> flowElements = process.getFlowElements();
Map<String, String> sequenceFlowMap = Maps.newHashMap();
flowElements.forEach(e -> {
if (e instanceof SequenceFlow) {
SequenceFlow sequenceFlow = (SequenceFlow) e;
String sourceRef = sequenceFlow.getSourceRef();
String targetRef = sequenceFlow.getTargetRef();
sequenceFlowMap.put(sourceRef + targetRef, sequenceFlow.getId());
}
});
// Get all historical Activities, i.e. those that have been executed and those that are currently being executed
List<HistoricActivityInstance> actList = historyService.createHistoricActivityInstanceQuery()
.processInstanceId(instanceId)
.list();
// Combine each pair of historical Activities
Set<String> actPairSet = new HashSet<>();
for (HistoricActivityInstance actA : actList) {
for (HistoricActivityInstance actB : actList) {
if (actA != actB) {
actPairSet.add(actA.getActivityId() + actB.getActivityId());
}
}
}
// Highlight Link ID
Set<String> highSequenceSet = Sets.newHashSet();
actPairSet.forEach(actPair -> {
logger.info("actPair:{}, seq:{}", actPair, sequenceFlowMap.get(actPair));
highSequenceSet.add(sequenceFlowMap.get(actPair));
logger.info("{}",highSequenceSet.toString());
});
// Get the completed Activity
List<HistoricActivityInstance> finishedActList = historyService
.createHistoricActivityInstanceQuery()
.processInstanceId(instanceId)
.finished()
.list();
// Highlight the completed Activity
Set<String> highActSet = Sets.newHashSet();
finishedActList.forEach(point -> highActSet.add(point.getActivityId()));
// Get the pending highlighted node, i.e. the currently executing node
List<HistoricActivityInstance> unfinishedActList = historyService
.createHistoricActivityInstanceQuery()
.processInstanceId(instanceId)
.unfinished()
.list();
Set<String> unfinishedPointSet = Sets.newHashSet();
unfinishedActList.forEach(point -> unfinishedPointSet.add(point.getActivityId()));
...
return AjaxResponse.ajax(ResponseCode.SUCCESS.getCode(),
ResponseCode.SUCCESS.getDesc(),
null);
} catch (Exception e) {
e.printStackTrace();
return AjaxResponse.ajax(ResponseCode.ERROR.getCode(),
"highlight failure",
e.toString());
}
}
Please see this piece of code:
// Highlight Link ID
Set<String> highSequenceSet = Sets.newHashSet();
actPairSet.forEach(actPair -> {
logger.info("actPair:{}, seq:{}", actPair, sequenceFlowMap.get(actPair));
highSequenceSet.add(sequenceFlowMap.get(actPair));
logger.info("{}",highSequenceSet.toString());
});
I expected to get 2 elements in the highSequenceSet, but it got 3, including an unexpected null.
The log printed in the console was:
Why is the first null added to the HashSet but not the rest?
Why is the first null added to the HashSet but not the rest?
HashSet implements the Set interface, which does not allow duplicate elements: the first null (the result of a lookup for a key that isn't in sequenceFlowMap) is added, and every subsequent null is rejected as a duplicate of the one already in the set.
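In this code, every actPair that has no entry in sequenceFlowMap makes get() return null; the first of those nulls is stored, and each later null is rejected. A minimal, standalone illustration of that HashSet behavior:

import java.util.HashSet;
import java.util.Set;

public class NullInHashSet {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        System.out.println(set.add(null)); // true  - the first null is stored
        System.out.println(set.add(null)); // false - duplicate, the set is unchanged
        System.out.println(set.size());    // 1
    }
}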
I have a process that must check the Gmail INBOX for a failure message. It works, but connecting and checking the message takes about 1 minute, which is too long.
My code:
public static SendResult sendingSuccess(final String email) {
SendResult result = new SendResult();
try {
Properties props = new Properties();
props.setProperty("mail.store.protocol", "imaps");
props.setProperty("mail.imap.com", "993");
props.setProperty("mail.imap.connectiontimeout", "5000");
props.setProperty("mail.imap.timeout", "5000");
Session session = Session.getDefaultInstance(props);
Store store = session.getStore("imaps");
store.connect("imap.googlemail.com", 993, GMAIL_USER, GMAIL_PASSWORD);
// Select and open folder
Folder inbox = store.getFolder("INBOX");
inbox.open(Folder.READ_WRITE);
// What to search for
SearchTerm searchTerm = new SearchTerm() {
private static final long serialVersionUID = -7187666524976851520L;
public boolean match(Message message) {
try {
String content = getContent(message);
boolean daemon = (message.getFrom()[0].toString()).contains("mailer-daemon@googlemail.com");
boolean failure = message.getSubject().contains("Failure");
boolean foundWarning = content.contains(email);
if (daemon && failure && foundWarning) {
return true;
}
} catch (Exception ex) {
ex.printStackTrace();
}
return false;
}
};
// Fetch unseen messages from inbox folder
Message[] messages = inbox.search(searchTerm);
// If there is no message then it's OK
result.setStatus(messages.length == 0);
result.setMessage(result.isStatus() ? "No failure message found for " + email : "Failure message found for " + email);
// Flag message as DELETED
for (Message message : messages) {
message.setFlag(Flags.Flag.DELETED, true);
}
// disconnect and close
inbox.close(false);
store.close();
} catch (Exception ex) {
result.setMessage(ex.getMessage());
ex.printStackTrace();
}
return result;
}
When I run this code to query for the failure message, it takes more than 1 minute to return the result.
======= Checking Gmail account for message failure! =====
Start...: 09:00:33
Finish..: 09:01:01
Result..: SendResult [status=true, message=No failure found for wrong.user@gmxexexex.net]
Is there any way to reduce this time?
The problem is most likely because you've written your own search term. JavaMail doesn't know how to translate your search term into an IMAP SEARCH request so it executes the search on the client, which requires downloading all the messages to the client to search there. Try this instead:
SearchTerm searchTerm = new AndTerm(new SearchTerm[] {
new FromStringTerm("mailer-daemon@googlemail.com"),
new SubjectTerm("Failure"),
new BodyTerm(email)
});
That will allow the search to be done by the IMAP server.
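For reference, those term classes live in the javax.mail.search package, so the only change to the original method is replacing the hand-written SearchTerm. A short sketch of the imports and wiring:

import javax.mail.Message;
import javax.mail.search.AndTerm;
import javax.mail.search.BodyTerm;
import javax.mail.search.FromStringTerm;
import javax.mail.search.SearchTerm;
import javax.mail.search.SubjectTerm;

// JavaMail translates these criteria into a single IMAP SEARCH command,
// so the filtering happens on the server instead of downloading every message.
SearchTerm searchTerm = new AndTerm(new SearchTerm[] {
    new FromStringTerm("mailer-daemon@googlemail.com"),
    new SubjectTerm("Failure"),
    new BodyTerm(email)
});
Message[] messages = inbox.search(searchTerm);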
I am using reactive-kafka-core 0.10.1 (targeting Kafka 0.9.x). It looks like the Kafka producer actor is stopped whenever an error is reported by the send callback. Is there any way to customize this behavior? Our use case is to recover and resend the messages.
private def processElement(element: ProducerMessage[K, V]) = {
val record = richProducer.props.partitionizer(element.value) match {
case Some(partitionId) => new ProducerRecord(richProducer.props.topic, partitionId, element.key, element.value)
case None => new ProducerRecord(richProducer.props.topic, element.key, element.value)
}
richProducer.producer.send(record, new Callback {
override def onCompletion(metadata: RecordMetadata, exception: Exception) = {
if (exception != null) {
handleError(exception)
}
}
})
()
}

private def handleError(ex: Throwable) = {
log.error(ex, "Stopping Kafka subscriber due to fatal error.")
stop()
}
I created an Android project using Google Cloud Endpoints, with a model class Poll.java, and now I want to write a query in the PollEndpoint.java class to retrieve a poll with a specific author.
This is the query code in PollEndpoint.java
@ApiMethod(name = "getSpecificPoll", path="lastpoll")
public Poll getSpecificPoll(@Named("creator") String creator) {
EntityManager mgr = getEntityManager();
Poll specificPoll = null;
try {
Query query = mgr.createQuery("select from Poll where creator = '" + creator + "'");
specificPoll = (Poll) query.getSingleResult();
} finally {
mgr.close();
}
return specificPoll;
}
The code in the client part is:
private class PollQuery extends AsyncTask<Void, Void, Poll> {
@Override
protected Poll doInBackground(Void... params) {
Poll pollQuery = new Poll();
Pollendpoint.Builder builderQuery = new Pollendpoint.Builder(
AndroidHttp.newCompatibleTransport(), new JacksonFactory(), null);
builderQuery = CloudEndpointUtils.updateBuilder(builderQuery);
Pollendpoint endpointQuery = builderQuery.build();
try {
pollQuery = endpointQuery.getSpecificPoll("Bill").execute();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
if (pollQuery != null){
System.out.println(pollQuery.getKeyPoll().getId());
} else System.out.println("Null query");
return null;
}
The problem is that the server throws an exception:
javax.persistence.PersistenceException: FROM clause of query has class com.development.pollmeproject.Poll but no alias
at org.datanucleus.api.jpa.NucleusJPAHelper.getJPAExceptionForNucleusException(NucleusJPAHelper.java:302)
I think the query statement is not correct. How can I write a correct one?
The query you provided is NOT valid JPQL. JPQL is more of the form
SELECT p FROM Poll p WHERE p.creator = :creatorParam
The error message tells you as much: the FROM clause needs an alias for the Poll entity (the p in the example above).
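Putting that into the endpoint method, here is a minimal sketch using a named parameter, which also avoids concatenating user input into the query string; it assumes Poll has a creator field, as implied by the question:

@ApiMethod(name = "getSpecificPoll", path = "lastpoll")
public Poll getSpecificPoll(@Named("creator") String creator) {
    EntityManager mgr = getEntityManager();
    try {
        Query query = mgr.createQuery(
                "SELECT p FROM Poll p WHERE p.creator = :creatorParam");
        query.setParameter("creatorParam", creator);
        // getSingleResult() throws NoResultException if no Poll matches
        return (Poll) query.getSingleResult();
    } finally {
        mgr.close();
    }
}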