error handling in Akka Kafka Producer - akka-stream

I am using reactive-kafka-core 0.10.1 (targeting Kafka 0.9.x). It looks like the Kafka producer actor is stopped whenever an error is encountered in the send callback. Is there any way to customize this behavior? Our use case is to recover and resend the messages.
private def processElement(element: ProducerMessage[K, V]) = {
  val record = richProducer.props.partitionizer(element.value) match {
    case Some(partitionId) => new ProducerRecord(richProducer.props.topic, partitionId, element.key, element.value)
    case None => new ProducerRecord(richProducer.props.topic, element.key, element.value)
  }
  richProducer.producer.send(record, new Callback {
    override def onCompletion(metadata: RecordMetadata, exception: Exception) = {
      if (exception != null) {
        handleError(exception)
      }
    }
  })
  ()
}

private def handleError(ex: Throwable) = {
  log.error(ex, "Stopping Kafka subscriber due to fatal error.")
  stop()
}
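For reference, here is a minimal sketch of the recover-and-resend idea using the plain Kafka Java client rather than reactive-kafka's API (the RetryingCallback class, the retry count, and the String key/value types are assumptions for illustration): instead of treating the first failed send as fatal, the callback re-submits the same record a bounded number of times before escalating.

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical callback that retries a failed send instead of stopping immediately.
class RetryingCallback implements Callback {
    private final KafkaProducer<String, String> producer;
    private final ProducerRecord<String, String> record;
    private final int retriesLeft;

    RetryingCallback(KafkaProducer<String, String> producer,
                     ProducerRecord<String, String> record,
                     int retriesLeft) {
        this.producer = producer;
        this.record = record;
        this.retriesLeft = retriesLeft;
    }

    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception == null) {
            return; // delivery succeeded, nothing to do
        }
        if (retriesLeft > 0) {
            // resend the same record with one fewer retry remaining
            producer.send(record, new RetryingCallback(producer, record, retriesLeft - 1));
        } else {
            // out of retries: only now treat the error as fatal (log, alert, stop, ...)
            System.err.println("Giving up on record after retries: " + exception);
        }
    }
}

Note that the callback runs on the producer's I/O thread, so the retry should stay non-blocking. In reactive-kafka 0.10.1 itself the failure path shown above (handleError) is private, so short of forking the library, a common workaround is to watch the producer actor and restart it (for example with Akka's BackoffSupervisor) rather than customizing the callback.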

Related

ABP framework / parallel programming

I have an AsyncPeriodicBackgroundWorkerBase subclass (DataValidateWorker) which runs at a 1-minute interval.
I need to send the data I get from the DB to a third-party web service and update the results in the DB. A web service response arrives in about 30-40 seconds, so I need to send the web service queries concurrently, not sequentially.
For this reason I wrote the code below using parallel programming, but I cannot get a database connection inside the Task I start; I get many errors saying the DB connection was closed while executing.
How can I create the DB connection for my Task?
Would it be better to write this job in an external application (exe or service) instead of ABP?
public class DataValidateWorker : AsyncPeriodicBackgroundWorkerBase
{
    private readonly IUnitOfWorkManager _unitOfWorkManager;
    private readonly IDataFilter _dataFilter;

    public DataValidateWorker(AbpAsyncTimer timer, IServiceScopeFactory serviceScopeFactory, IDataFilter dataFilter, IUnitOfWorkManager unitOfWorkManager) : base(timer, serviceScopeFactory)
    {
        _dataFilter = dataFilter;
        _unitOfWorkManager = unitOfWorkManager;
        Timer.Period = 60 * 1000; // 60 seconds
    }

    [UnitOfWork]
    protected async override Task DoWorkAsync(PeriodicBackgroundWorkerContext workerContext)
    {
        try
        {
            var notificationValidationRepository = workerContext.ServiceProvider.GetRequiredService<IRepository<NotificationValidation, int>>();
            var notificationValidationItems = await notificationValidationRepository.GetQueryableAsync();
            List<NotificationValidation> list = new List<NotificationValidation>();
            using (var uow = _unitOfWorkManager.Begin())
            {
                using (_dataFilter.Disable<IMultiTenant>())
                {
                    list = notificationValidationItems.Where(x => x.RecordDateTime <= DateTime.Now && x.ValidationResult == (int)ValidationResult.NotStarted).ToList();
                }
            }
            NotificationValidationArgs jobArgs = new NotificationValidationArgs();
            foreach (var item in list)
            {
                jobArgs.notificationValidationId = item.Id;
                Task taskA = Task.Factory.StartNew(async (Object obj) =>
                {
                    // doing some third party web service operations and db operations
                }, jobArgs);
            }
        }
        catch (Exception ex)
        {
            Logger.LogCritical(2001, ex, DateTime.Now.ToString() + " -> DataValidateWorker -> try 1 -> RDMS uow");
        }
    }
}
You don't await any of the tasks, so the lifetime of the surrounding scope (and its database connection) ends while your tasks are still running.
Store all of the tasks in a collection and await them before the method finishes.
Something like below:
public class DataValidateWorker : AsyncPeriodicBackgroundWorkerBase
{
    public DataValidateWorker(AbpAsyncTimer timer, IServiceScopeFactory serviceScopeFactory) : base(timer, serviceScopeFactory)
    {
    }

    protected override async Task DoWorkAsync(PeriodicBackgroundWorkerContext workerContext)
    {
        var tasks = new List<Task>();
        foreach (var item in list)
        {
            tasks.Add(YourLongJob(arg)); // don't await here; collect the tasks in a collection
        }
        await Task.WhenAll(tasks); // wait until all of them are completed
    }

    private async Task YourLongJob(object arg)
    {
        await Task.Delay(30 * 1000); // a long job
    }
}

handling events that do not generate alerts in Apache Flink

I'm using Flink CEP and I need to handle even the events that do not generate alerts. How can I do it?
I'm consuming events from RabbitMQ and have defined some patterns. What I need to do now is send all the received events to another queue, for a remote API. I'm new to Flink, so I followed the example in the documentation. When I try to send the events after matching them against the defined patterns, I only get those that match the patterns. What I want is, for example, to set an attribute to true on my events and send them all to the output queue.
public static void cep() throws Exception {
    /**
     * RabbitMQ connection
     */
    final RMQConnectionConfig connectionConfig = new RMQConnectionConfig.Builder()
            .setHost(HOST)
            .setPort(PORTS[RD.getValue()])
            .setUserName("guest")
            .setPassword("guest")
            .setVirtualHost("/")
            .build();

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    /**
     * Retrieve data inputEventstream from RabbitMQ
     */
    final DataStream<String> inputEventstream = env
            .addSource(new RMQSource<>(
                    connectionConfig,          // config for the RabbitMQ connection
                    "input",                   // name of the RabbitMQ queue to consume
                    true,                      // use correlation ids; can be false if only at-least-once is required
                    new SimpleStringSchema())) // deserialization schema to turn messages into Java objects
            .setParallelism(1);

    /**
     * Change DataStream<String> to DataStream<MonitoringEvent> where
     * MonitoringEvent refers to a class which models our event.
     */
    DataStream<MonitoringEvent> inputEventStreamClean = inputEventstream.flatMap(new Tokenizer());

    Pattern<MonitoringEvent, ?> warningPattern = Pattern.<MonitoringEvent>begin("start")
            .subtype(MonitoringEvent.class)
            .where(new SimpleCondition<MonitoringEvent>() {
                @Override
                public boolean filter(MonitoringEvent value) {
                    return Integer.parseInt(value.getAncienneChute()) >= CHUTE_GRAVE;
                }
            }).or(new SimpleCondition<MonitoringEvent>() {
                @Override
                public boolean filter(MonitoringEvent value) {
                    return value.isChaiseRoulante();
                }
            }).or(new SimpleCondition<MonitoringEvent>() {
                @Override
                public boolean filter(MonitoringEvent value) {
                    return value.isDeambulateur();
                }
            }).or(new SimpleCondition<MonitoringEvent>() {
                @Override
                public boolean filter(MonitoringEvent value) {
                    return value.isDeambulateur();
                }
            }).or(new SimpleCondition<MonitoringEvent>() {
                @Override
                public boolean filter(MonitoringEvent value) {
                    return EntityManager.getInstance().hasCurrentYearFallTwice(value.getIdClient());
                }
            });

    //PatternStream<MonitoringEvent> fallPatternStream = CEP.pattern(inputEventStreamClean.keyBy("idClient"), warningPattern);
    inputEventStreamClean.print();

    // Create a pattern stream from our warning pattern
    PatternStream<MonitoringEvent> tempPatternStream = CEP.pattern(
            inputEventStreamClean.keyBy("idClient"),
            warningPattern);

    DataStream<FallWarning> warnings = tempPatternStream.select(
            (Map<String, List<MonitoringEvent>> pattern) -> {
                MonitoringEvent first = (MonitoringEvent) pattern.get("start").get(0);
                return new FallWarning(first.getIdClient(), Integer.valueOf(first.getAncienneChute()));
            }
    );

    // Alert pattern: Two consecutive temperature warnings appearing within a time interval of 20 seconds
    Pattern<FallWarning, ?> alertPattern = Pattern.<FallWarning>begin("start");

    // Create a pattern stream from our alert pattern
    PatternStream<FallWarning> alertPatternStream = CEP.pattern(
            //warnings.keyBy("idClient"),
            warnings,
            alertPattern);

    // Generate alert
    DataStream<Alert> alerts = alertPatternStream.flatSelect(
            (Map<String, List<FallWarning>> pattern, Collector<Alert> out) -> {
                FallWarning first = pattern.get("start").get(0);
                if (first.idNiveauUrgence >= CHUTE_GRAVE && (first.isChaiseRoulante() || first.isDeambulateur() || first.isFracture())) {
                    out.collect(new Alert(first.idClient));
                }
            });

    // Print the warning and alert events to stdout
    warnings.print();
    alerts.print(); // here I can send them to RabbitMQ

    env.execute();
}
You just need to add a sink to your alerts DataStream, for example (mapping each Alert to a String first so that SimpleStringSchema can serialize it):

alerts.map(Alert::toString)
      .addSink(new RMQSink<String>(
              connectionConfig,           // config for the RabbitMQ connection
              "queueName",                // name of the RabbitMQ queue to send messages to
              new SimpleStringSchema())); // serialization schema to turn Java objects into messages

as per the example at
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/connectors/rabbitmq.html
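If the goal is to forward every incoming event, not only the ones that matched a pattern, the same kind of sink can be attached to the full input stream as well. A minimal sketch reusing the names from the question's code (inputEventStreamClean and connectionConfig); the "allEvents" queue name and the toString() serialization are assumptions, and it assumes org.apache.flink.api.common.functions.MapFunction plus the flink-connector-rabbitmq RMQSink are on the classpath:

// Send ALL events to the output queue, independently of pattern matching.
// This sits alongside the alert sink; a stream can have several sinks.
inputEventStreamClean
        .map(new MapFunction<MonitoringEvent, String>() {
            @Override
            public String map(MonitoringEvent event) {
                // serialize the event however the downstream API expects (JSON, toString, ...)
                return event.toString();
            }
        })
        .addSink(new RMQSink<String>(
                connectionConfig,           // reuse the RabbitMQ connection config
                "allEvents",                // hypothetical name of the output queue
                new SimpleStringSchema())); // turns the String into message bytes

Since attaching a sink does not consume the stream, the CEP pipeline above keeps working unchanged.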

How can I access sqlite database on a webserver in codename one

Please, how can I access an SQLite database on the web server in Codename One? I can only use the Database API to access a database on the device; accessing one on the web server seems to be quite a different thing. I need a code snippet for this. Thanks.
Use the code below; it is not tested and you may have to adjust it to suit your needs. Leave a comment if there's an issue:
ConnectionRequest req = new ConnectionRequest() {
    @Override
    protected void handleException(Exception ex) {
        //handle error
    }
};
req.setUrl(YourURL);
req.setPost(true);
req.setHttpMethod("POST"); //Change to GET if necessary
req.setDuplicateSupported(true);
req.addArgument("argumentToSendThroughPostOrGet1", "value1");
req.addArgument("argumentToSendThroughPostOrGet2", "value2");
NetworkManager.getInstance().addToQueueAndWait(req);
if (req.getResponseCode() == 200) {
    Map<String, Object> out = new HashMap<>();
    Display.getInstance().invokeAndBlock(() -> {
        JSONParser p = new JSONParser();
        try (InputStreamReader r = new InputStreamReader(new ByteArrayInputStream(req.getResponseData()))) {
            out.putAll(p.parseJSON(r));
        } catch (IOException ex) {
            //handle error
        }
    });
    if (!out.isEmpty()) {
        List<Map<String, Object>> responses = (List<Map<String, Object>>) out.get("response");
        for (Object response : responses) {
            Map res = (Map) response;
            System.out.println(res.get("key"));
        }
    } else {
        //handle error
    }
} else {
    //handle error
}
TEST JSON RESPONSE:
{
    "response": [
        {
            "key": "I was returned"
        }
    ]
}
EDIT:
To pass data from TextField:
req.addArgument("argumentToSendThroughPostOrGet1", myTextField.getText());
Based on your comment, you can read those arguments in PHP as simply as below:
$var1 = $_POST["argumentToSendThroughPostOrGet1"];
$var1 = $_GET["argumentToSendThroughPostOrGet1"]; // if GET method is used in Codename One
//Or use $_REQUEST which supports both methods but not advisable to be used for production
...
And you can use those variables in your PHP code normally.
Example of Usage with MySql Query:
class Connection {
    private $mysqli;

    function connect() {
        $this->mysqli = mysqli_init();
        $this->mysqli->real_connect("localhost", "username", "password", "databaseName") or die('Could not connect to database!');
        $this->mysqli->query("SET NAMES 'UTF8'");
        return $this->mysqli;
    }

    function close() {
        mysqli_close($this->mysqli);
    }
}

$connection = new Connection();
$mysqli = $connection->connect();
$mysqli->query("SELECT * FROM MyTable WHERE ColumnName LIKE '%$var1%' ORDER BY PrimaryKeyId ASC LIMIT 100");

Get Unread emails from Google API

I'm trying to get the count of unread emails using the Google API, but I am not able to. Any help is highly appreciated. I'm not getting any error, but the count doesn't match the actual number shown in Gmail.
try
{
    String serviceAccountEmail = "xxx@developer.gserviceaccount.com";
    var certificate = new X509Certificate2(@"C:\Projects\xxx\xyz\API Project-xxxxx.p12", "notasecret", X509KeyStorageFlags.Exportable);
    ServiceAccountCredential credential = new ServiceAccountCredential(
        new ServiceAccountCredential.Initializer(serviceAccountEmail)
        {
            User = "xxx@gmail.com",
            Scopes = new[] { Google.Apis.Gmail.v1.GmailService.Scope.GmailReadonly }
        }.FromCertificate(certificate));
    var gmailservice = new Google.Apis.Gmail.v1.GmailService(new BaseClientService.Initializer()
    {
        HttpClientInitializer = credential,
        ApplicationName = "GoogleApi3",
    });
    try
    {
        List<Message> lst = ListMessages(gmailservice, "xxx@gmail.com", "IN:INBOX IS:UNREAD");
    }
    catch (Exception e)
    {
        Console.WriteLine("An error occurred: " + e.Message);
    }
}
catch (Exception ex)
{
}
Just call labels.get with id="INBOX"; the returned label carries exactly those stats (how many messages are in that label, how many are unread, and the same for threads).
https://developers.google.com/gmail/api/v1/reference/users/labels/get
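For illustration, this is what the call looks like with the Gmail API Java client (the .NET client used in the question exposes the same Users.Labels.Get request); the method name and the use of "me" for the authenticated user are assumptions for the sketch:

import com.google.api.services.gmail.Gmail;
import com.google.api.services.gmail.model.Label;

// Fetch the INBOX label and read its unread counters (assumes an already authorized Gmail client).
public static void printUnreadCount(Gmail service) throws java.io.IOException {
    Label inbox = service.users().labels().get("me", "INBOX").execute();
    System.out.println("Unread messages in INBOX: " + inbox.getMessagesUnread());
    System.out.println("Unread threads in INBOX:  " + inbox.getThreadsUnread());
}

This avoids listing and counting messages yourself, which is where the mismatch with the number shown in Gmail usually comes from.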
You can use the ListMessages method from the API example (included for completeness) for searching:
private static List<Message> ListMessages(GmailService service, String userId, String query)
{
    List<Message> result = new List<Message>();
    UsersResource.MessagesResource.ListRequest request = service.Users.Messages.List(userId);
    request.Q = query;
    do
    {
        try
        {
            ListMessagesResponse response = request.Execute();
            result.AddRange(response.Messages);
            request.PageToken = response.NextPageToken;
        }
        catch (Exception e)
        {
            Console.WriteLine("An error occurred: " + e.Message);
        }
    } while (!String.IsNullOrEmpty(request.PageToken));
    return result;
}
You can use this search method to find unread messages, for example like this:
List<Message> unreadMessageIDs = ListMessages(service, "me", "is:unread");
The q parameter (query) accepts the same syntax as the Gmail search bar at the top of the web interface, as documented here: https://support.google.com/mail/answer/7190?hl=en.
Note that only a few fields of the returned Message objects are set. If you want to retrieve the full messages you'll have to use the GetMessage method from the API:
public static Message GetMessage(GmailService service, String userId, String messageId)
{
    try
    {
        return service.Users.Messages.Get(userId, messageId).Execute();
    }
    catch (Exception e)
    {
        Console.WriteLine("An error occurred: " + e.Message);
    }
    return null;
}
I agree that the API is not straightforward and is missing a lot of functionality.
Solution for .Net:
// Get UNREAD messages
public void getUnreadEmails(GmailService service)
{
    UsersResource.MessagesResource.ListRequest Req_messages = service.Users.Messages.List("me");
    // Filter by labels
    Req_messages.LabelIds = new List<String>() { "INBOX", "UNREAD" };
    // Get message list
    IList<Message> messages = Req_messages.Execute().Messages;
    if ((messages != null) && (messages.Count > 0))
    {
        foreach (Message List_msg in messages)
        {
            // Get message content
            UsersResource.MessagesResource.GetRequest MsgReq = service.Users.Messages.Get("me", List_msg.Id);
            Message msg = MsgReq.Execute();
            Console.WriteLine(msg.Snippet);
            Console.WriteLine("----------------------");
        }
    }
    Console.Read();
}

How to make a correct select query on google endpoint?

I created an Android project using Google Cloud Endpoints with a model class Poll.java, and now I want to make a query in the PollEndpoint.java class to retrieve a poll with a specific author.
This is the query code in PollEndpoint.java
@ApiMethod(name = "getSpecificPoll", path = "lastpoll")
public Poll getSpecificPoll(@Named("creator") String creator) {
    EntityManager mgr = getEntityManager();
    Poll specificPoll = null;
    try {
        Query query = mgr.createQuery("select from Poll where creator = '" + creator + "'");
        specificPoll = (Poll) query.getSingleResult();
    } finally {
        mgr.close();
    }
    return specificPoll;
}
The code in the client part is:
private class PollQuery extends AsyncTask<Void, Void, Poll> {

    @Override
    protected Poll doInBackground(Void... params) {
        Poll pollQuery = new Poll();
        Pollendpoint.Builder builderQuery = new Pollendpoint.Builder(
                AndroidHttp.newCompatibleTransport(), new JacksonFactory(), null);
        builderQuery = CloudEndpointUtils.updateBuilder(builderQuery);
        Pollendpoint endpointQuery = builderQuery.build();
        try {
            pollQuery = endpointQuery.getSpecificPoll("Bill").execute();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        if (pollQuery != null) {
            System.out.println(pollQuery.getKeyPoll().getId());
        } else {
            System.out.println("Null query");
        }
        return null;
    }
}
The problem is that the server throws an exception:
javax.persistence.PersistenceException: FROM clause of query has class com.development.pollmeproject.Poll but no alias
at org.datanucleus.api.jpa.NucleusJPAHelper.getJPAExceptionForNucleusException(NucleusJPAHelper.java:302)
I think the query statement is not correct. How can I write a correct one?
The query you provided is NOT valid JPQL. JPQL is more of the form
SELECT p FROM Poll p WHERE p.creator = :creatorParam
The error message does tell you as much, though: the FROM clause of your query has no alias.
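For completeness, a minimal sketch of how the endpoint method from the question could run that JPQL with a bound parameter instead of string concatenation (the method, class and getEntityManager() names mirror the question; TypedQuery and setParameter are standard JPA):

import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

@ApiMethod(name = "getSpecificPoll", path = "lastpoll")
public Poll getSpecificPoll(@Named("creator") String creator) {
    EntityManager mgr = getEntityManager();
    try {
        TypedQuery<Poll> query = mgr.createQuery(
                "SELECT p FROM Poll p WHERE p.creator = :creatorParam", Poll.class);
        query.setParameter("creatorParam", creator);
        // getSingleResult() throws NoResultException if no poll matches the creator
        return query.getSingleResult();
    } finally {
        mgr.close();
    }
}

Binding the parameter also avoids the injection risk of concatenating creator directly into the query string.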
