ABP: How to configure prefetch count when using RabbitMQ for distributed events?

In an ABP framework application using RabbitMQ for distributed eventing, how do I configure the prefetch count?
At the moment I have the following issue:
I publish distributed events from my Blazor app. The events are consumed by some worker apps. The workers then do things like transferring master data from one system to another.
Application 1 fires 3000 events while only one consumer app is running, and that one consumer receives all the messages. If I now spawn a second consumer, it does nothing because there are no messages left to consume.
After a while (processing those messages can take quite some time) I run into a lot of timeouts. Because of that I am currently integrating the outbox pattern, so that at least it is ensured that information does not get lost.
In a scenario where I scale my workers up and down depending on the amount of work to do, it's necessary to be able to configure this behavior.
I tried to take a look at AbpRabbitMqOptions and AbpRabbitMqEventBusOptions but sadly could not find a property that seems to match what I am looking for (adjusting prefetch).
At the moment my configuration looks something like this:
private void ConfigureDistributedEvents(ServiceConfigurationContext context, IConfiguration configuration)
{
    Configure<AbpRabbitMqOptions>(options =>
    {
    });

    Configure<AbpRabbitMqEventBusOptions>(options =>
    {
    });

    // distributed lock
    context.Services.AddSingleton<IDistributedLockProvider>(sp =>
    {
        var connection = ConnectionMultiplexer
            .Connect(configuration["Redis:Configuration"]);
        return new RedisDistributedSynchronizationProvider(connection.GetDatabase());
    });

    // outbox/inbox
    Configure<AbpDistributedEventBusOptions>(options =>
    {
        options.Outboxes.Configure(config =>
        {
            config.UseDbContext<FooDbContext>();
        });

        options.Inboxes.Configure(config =>
        {
            config.UseDbContext<FooDbContext>();
        });
    });
}
And then there's appsettings.json, but it's standard:
"RabbitMQ": {
"Connections": {
"Default": {
"HostName": "localhost"
}
},
"EventBus": {
"ClientName": "Foo_Queue",
"ExchangeName": "Foo"
}
Is there anything I am missing?
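Assuming a newer ABP version that exposes a PrefetchCount property on AbpRabbitMqEventBusOptions (older releases apparently lack it, which would match the search above), the configuration might look like this minimal sketch:

Configure<AbpRabbitMqEventBusOptions>(options =>
{
    // Assumption: PrefetchCount exists in this ABP version. It maps to
    // RabbitMQ's basic.qos and caps the number of unacknowledged messages
    // pushed to each consumer, so a late-joining worker can still get work.
    options.PrefetchCount = 1;
});

Without a prefetch limit, RabbitMQ can push every queued message to the first connected consumer, which is consistent with the behavior described above.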

Related

Define ExecutionStrategy at configuration level in EF core

I have an application using EF Core connected to Azure SQL.
We were facing resilience failures, for which adding EnableRetryOnFailure() was the solution, and I have configured it:
services.AddEntityFrameworkSqlServer()
    .AddDbContext<jmasdbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DataContext"), sqlServerOptionsAction: sqlActions =>
        {
            sqlActions.EnableRetryOnFailure(
                maxRetryCount: 10,
                maxRetryDelay: TimeSpan.FromSeconds(5),
                errorNumbersToAdd: null);
        }), ServiceLifetime.Transient);
Now, this fails when we use BeginTransaction, throwing the error below:
"The configured execution strategy 'SqlServerRetryingExecutionStrategy' does not support user-initiated transactions. Use the execution strategy returned by 'DbContext.Database.CreateExecutionStrategy()' to execute all the operations in the transaction as a retriable unit."
I looked into the MS docs, which suggest a way to apply the execution strategy manually using ExecuteAsync: https://learn.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/implement-resilient-entity-framework-core-sql-connections
This has become a pain, as we have more than 25 places with these transactions.
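For reference, the per-call-site pattern from those docs looks roughly like this sketch (_context, Orders, and order are hypothetical placeholders):

var strategy = _context.Database.CreateExecutionStrategy();
await strategy.ExecuteAsync(async () =>
{
    // The whole unit of work runs as one retriable unit.
    await using var transaction = await _context.Database.BeginTransactionAsync();
    _context.Orders.Add(order); // hypothetical DbSet and entity
    await _context.SaveChangesAsync();
    await transaction.CommitAsync();
});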
I tried a custom ExecutionStrategy at the DbContext level, but that did not help:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    if (!optionsBuilder.IsConfigured && !string.IsNullOrEmpty(ConnectionString))
    {
        optionsBuilder.UseSqlServer(ConnectionString, options =>
        {
            options.ExecutionStrategy(dependencies =>
                new SqlServerRetryingExecutionStrategy(
                    dependencies,
                    maxRetryCount: 3,
                    maxRetryDelay: TimeSpan.FromSeconds(5),
                    errorNumbersToAdd: new List<int> { 4060 }));
        });
    }
}
Is there any way to define this at a global level? We do not want a different strategy for each operation; whenever there is a failure, we want the whole transaction to be rolled back and restarted from the beginning.
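One workaround that avoids touching every call site individually (a sketch, not a built-in EF Core feature; the helper name is mine) is to wrap the documented pattern once in an extension method and route every transaction through it:

using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class ExecutionStrategyExtensions
{
    // Runs the given work inside a transaction; if the configured strategy
    // detects a transient failure, the whole unit of work is retried from
    // the beginning.
    public static Task ExecuteInTransactionAsync(this DbContext context, Func<Task> work)
    {
        var strategy = context.Database.CreateExecutionStrategy();
        return strategy.ExecuteAsync(async () =>
        {
            await using var transaction = await context.Database.BeginTransactionAsync();
            await work();
            await transaction.CommitAsync();
        });
    }
}

Each existing BeginTransaction block then becomes a call like context.ExecuteInTransactionAsync(async () => { ... });, which keeps a single retry policy defined in one place.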

Google Cloud PubSub sends the message to more than one consumer (in the same subscription)

I have a Java SpringBoot2 application (app1) that sends messages to a Google Cloud PubSub topic (it is the publisher).
Another Java SpringBoot2 application (app2) is subscribed to a subscription to receive those messages. In this case I have more than one instance (k8s auto-scaling is enabled), so I have more than one pod for this app consuming messages from PubSub.
Some messages are consumed by one instance of app2, but many others are sent to more than one app2 instance, so processing is duplicated for those messages.
Here is the code of the consumer (app2):
private final static int ACK_DEAD_LINE_IN_SECONDS = 30;
private static final long POLLING_PERIOD_MS = 250L;
private static final int WINDOW_MAX_SIZE = 1000;
private static final Duration WINDOW_MAX_TIME = Duration.ofSeconds(1L);

@Autowired
private PubSubAdmin pubSubAdmin;

@Bean
public ApplicationRunner runner(PubSubReactiveFactory reactiveFactory) {
    return args -> {
        createSubscription("subscription-id", "topic-id", ACK_DEAD_LINE_IN_SECONDS);
        reactiveFactory.poll("subscription-id", POLLING_PERIOD_MS) // Poll the PubSub periodically
            .map(msg -> Pair.of(msg, getMessageValue(msg)))        // Extract the message as a pair
            .bufferTimeout(WINDOW_MAX_SIZE, WINDOW_MAX_TIME)       // Create a buffer of messages to bulk process
            .flatMap(this::processBuffer)                          // Process the buffer
            .doOnError(e -> log.error("Error processing event window", e))
            .retry()
            .subscribe();
    };
}

private void createSubscription(String subscriptionName, String topicName, int ackDeadline) {
    pubSubAdmin.createTopic(topicName);
    try {
        pubSubAdmin.createSubscription(subscriptionName, topicName, ackDeadline);
    } catch (AlreadyExistsException e) {
        log.info("Pubsub subscription '{}' already configured for topic '{}': {}", subscriptionName, topicName, e.getMessage());
    }
}
private Flux<Void> processBuffer(List<Pair<AcknowledgeablePubsubMessage, PreparedRecordEvent>> msgsWindow) {
    return Flux.fromStream(
            msgsWindow.stream()
                .collect(Collectors.groupingBy(msg -> msg.getRight().getData())) // Group the messages by same data
                .values()
                .stream()
        )
        .flatMap(this::processDataBuffer);
}

private Mono<Void> processDataBuffer(List<Pair<AcknowledgeablePubsubMessage, PreparedRecordEvent>> dataMsgsWindow) {
    return processData(
            dataMsgsWindow.get(0).getRight().getData(),
            dataMsgsWindow.stream()
                .map(Pair::getRight)
                .map(PreparedRecordEvent::getRecord)
                .collect(Collectors.toSet())
        )
        .doOnSuccess(it ->
            dataMsgsWindow.forEach(msg -> {
                log.info("Mark msg ACK");
                msg.getLeft().ack();
            })
        )
        .doOnError(e -> {
            log.error("Error on PreparedRecordEvent event", e);
            dataMsgsWindow.forEach(msg -> {
                log.error("Mark msg NACK");
                msg.getLeft().nack();
            });
        })
        .retry();
}
private Mono<Void> processData(Data data, Set<Record> records) {
    // For each message, make calculations over the records associated to the data
    final DataQuality calculated = calculatorService.calculateDataQualityFor(data, records); // Arithmetic calculations
    return this.daasClient.updateMetrics(calculated) // Update DB record with a DaaS to wrap DB access
        .flatMap(it -> {
            if (it.getProcessedRows() >= it.getValidRows()) {
                return finish(data);
            }
            return Mono.just(data);
        })
        .then();
}

private Mono<Data> finish(Data data) {
    return dataClient.updateStatus(data.getId(), DataStatus.DONE) // Update DB record with a DaaS to wrap DB access
        .doOnSuccess(updatedData -> pubSubClient.publish(
            new Qa0DonedataEvent(updatedData) // Publish a new event on another topic
        ))
        .doOnError(err -> log.error("Error finishing data"))
        .onErrorReturn(data);
}
I need each message to be consumed by one and only one app2 instance. Does anybody know if this is possible? Any ideas on how to achieve it?
Maybe the right way is to create one subscription for each app2 instance and configure the topic to send each message to exactly one subscription instead of to every one. Is that possible?
According to the official documentation, once a message is sent to a subscriber, Pub/Sub tries not to deliver it to any other subscriber on the same subscription (app2 instances are subscribers of the same subscription):
Once a message is sent to a subscriber, the subscriber should acknowledge the message. A message is considered outstanding once it has been sent out for delivery and before a subscriber acknowledges it. Pub/Sub will repeatedly attempt to deliver any message that has not been acknowledged. While a message is outstanding to a subscriber, however, Pub/Sub tries not to deliver it to any other subscriber on the same subscription. The subscriber has a configurable, limited amount of time -- known as the ackDeadline -- to acknowledge the outstanding message. Once the deadline passes, the message is no longer considered outstanding, and Pub/Sub will attempt to redeliver the message.
In general, Cloud Pub/Sub has at-least-once delivery semantics. That means it is possible for messages that have already been acked to be redelivered, and for multiple subscribers on the same subscription to receive the same message. These two cases should be relatively rare for a well-behaved subscriber, but without keeping track of the IDs of all messages delivered across all subscribers, it is not possible to guarantee that there won't be duplicates.
If it is happening with some frequency, it would be good to check if your messages are getting acknowledged within the ack deadline. You are buffering messages for 1s, which should be relatively small compared to your ack deadline of 30s, but it also depends on how long the messages ultimately take to process. For example, if the buffer is being processed in sequential order, it could be that the later messages in your 1000-message buffer aren't being processed in time. You could look at the subscription/expired_ack_deadlines_count metric in Cloud Monitoring to determine if it is indeed the case that your acks for messages are late. Note that late acks for even a small number of messages could result in more duplicates. See the "Message Redelivery & Duplication Rate" section of the Fine-tuning Pub/Sub performance with batch and flow control settings post.
OK, after running tests, reading the documentation, and reviewing the code, I found a "small" error in it.
We had a wrong "retry" on the "processDataBuffer" method: when an error happened, the messages in the buffer were marked as NACK, so they were delivered to another instance, but due to the retry they were also executed again, this time successfully, so the messages were marked as ACK as well.
Because of this, some of them were processed twice.
private Mono<Void> processDataBuffer(List<Pair<AcknowledgeablePubsubMessage, PreparedRecordEvent>> dataMsgsWindow) {
    return processData(
            dataMsgsWindow.get(0).getRight().getData(),
            dataMsgsWindow.stream()
                .map(Pair::getRight)
                .map(PreparedRecordEvent::getRecord)
                .collect(Collectors.toSet())
        )
        .doOnSuccess(it ->
            dataMsgsWindow.forEach(msg -> {
                log.info("Mark msg ACK");
                msg.getLeft().ack();
            })
        )
        .doOnError(e -> {
            log.error("Error on PreparedRecordEvent event", e);
            dataMsgsWindow.forEach(msg -> {
                log.error("Mark msg NACK");
                msg.getLeft().nack();
            });
        })
        .retry(); // this retry has been deleted
}
My question is resolved.
Once the mentioned bug was corrected, I still received duplicated messages. It is accepted that Google Cloud Pub/Sub does not guarantee exactly-once delivery when you use buffers or windows. That is exactly my scenario, so I have to implement a mechanism to remove duplicates based on a message ID.
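Such a dedup guard can be as small as a bounded cache of recently seen message IDs, checked before processing. A sketch (shown in C# purely for illustration; the structure translates directly to Java, and the cache size and retention window are assumptions that should exceed your redelivery window):

using System;
using Microsoft.Extensions.Caching.Memory;

public class MessageDeduplicator
{
    private readonly MemoryCache seen = new MemoryCache(
        new MemoryCacheOptions { SizeLimit = 100_000 });

    // Returns true the first time a message ID is observed; false for duplicates.
    public bool TryMarkProcessed(string messageId)
    {
        if (seen.TryGetValue(messageId, out _))
            return false; // already handled: ack it, but skip processing

        seen.Set(messageId, true, new MemoryCacheEntryOptions
        {
            Size = 1, // required because SizeLimit is set
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
        });
        return true;
    }
}

Note that an in-memory cache only deduplicates within one instance; with several app2 pods a shared store (e.g. Redis) keyed by message ID would be needed.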

Controlling reactor execution for certain use cases (or getting a response at a certain point)

I am trying to update a document in MongoDB but cannot get to the point of checking the update status and responding back to the user. Below is my code:
@Autowired
ReactiveMongoTemplate mongoTemplate;

public Mono<String> updateUser(UserIn userIn) {
    UserResponse resp = new UserResponse();
    mongoTemplate.findAndModify(query, update, User.class)
        //.doOnSuccess(bsItem -> {
        .flatMap(user -> {
            if (user.getItemId().equals(userIn.getId("_id")))
                resp.setStatus("Updated");
            else
                resp.setStatus("Failed");
            return Mono.just(resp);
        }).subscribe();
    return Mono.just(resp.getStatus());
}
Even though the update happens in MongoDB, it throws an NPE while returning. How do I get control back after the reactor operator has executed here?
You should almost never subscribe in your own application.
The subscriber is the client that initiated the call; in this case it is probably the web application. Your application is just relaying the data, so your application is a publisher, which means you should not subscribe. The web app subscribes.
Try this.
@Autowired
ReactiveMongoTemplate mongoTemplate;

public Mono<String> updateUser(UserIn userIn) {
    return mongoTemplate.findAndModify(query, update, User.class)
        .flatMap(user -> {
            final UserResponse resp = new UserResponse();
            if (user.getItemId().equals(userIn.getId("_id")))
                resp.setStatus("Updated");
            else
                resp.setStatus("Failed");
            return Mono.just(resp.getStatus());
        });
}
A Mono is not like a stream; you fetch, map, and return, all in the same Mono, like a chain of events. An event chain.

SignalR MSSQL scale

I am having a hard time getting push notifications from the server when using MSSQL scaleout.
Message flow is one-way, from server to subscribers (which are distributed by groups).
Push messages are not received on the UI with the following host configuration:
GlobalHost.DependencyResolver.UseSqlServer(new SqlScaleoutConfiguration(ConfigurationManager.ConnectionStrings["SignalR"].ConnectionString));
Messages are created in SignalRDb and records are present in the tables; however, they do not reach the UI. With SQL scaleout disabled, all messages propagate to the UI successfully.
Here is my Notification code:
public void OnNext(ResultModel value)
{
    Clients.Group(group).notify(value);
}
The OnNext method always executes, with or without scaleout, and group and value are correct. No exceptions are thrown on the UI or backend. Below is the UI part:
var hubProxy = $.connection.visitsHub;
hubProxy.client.notify = function (updatedResult) {
    console.log(updatedResult.Id);
};
Any help is appreciated.

Load Model async using Task

I'd like to know how I can load the model asynchronously from the service using Task.
Until now I have used BackgroundWorker in the view model.
Can someone give me a clear example?
Thanks.
To load a model using the TPL, here's some indicative code...
Task t = new Task(() =>
{
    // broadcast start of busy state
});
// chain each continuation on the previous task so the steps run in order
Task load = t.ContinueWith((z) =>
{
    // load the model
});
load.ContinueWith((x) =>
{
    // broadcast end of busy state
});
t.Start();
The first task lets the UI know that the app is entering a busy state so that the user can be given visual cues.
The second task performs the heavy lifting.
The final task announces that the work is complete. (x) can be queried to determine the appropriate UI message (it worked or it didn't work)
Task documentation is here http://msdn.microsoft.com/en-us/library/vstudio/system.threading.tasks.task
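On current .NET, the same three-step flow is usually written with async/await instead of explicit continuations. A minimal sketch (IsBusy, Model, and modelService are hypothetical view-model members standing in for your own service layer):

public async Task LoadModelAsync()
{
    IsBusy = true;                              // broadcast start of busy state
    try
    {
        Model = await modelService.LoadAsync(); // load without blocking the UI thread
    }
    finally
    {
        IsBusy = false;                         // broadcast end of busy state
    }
}

await resumes on the UI thread's synchronization context here, so the busy-state properties can be set directly without manual marshalling.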
