Netty synchronous client with asynchronous callers - request

I am creating a server which consumes commands from numerous sources such as JMS, SNMP, HTTP etc. These are all asynchronous and are working fine. The server maintains a single connection to a single item of legacy hardware which has a request/reply architecture with a custom TCP protocol.
Ideally I would like a single method, either in this blocking style
public Response issueCommandToLegacyHardware(Command command)
or in this asynchronous style
public Future<Response> issueCommandToLegacyHardware(Command command)
I am relatively new to Netty and asynchronous programming, basically learning it as I go along. My current thought is that my LegacyHardwareClient class will have a public synchronized issueCommandToLegacyHardware(Command command) that writes to the client channel to the legacy hardware, then calls take() on a SynchronousQueue<Response>, which will block. The ChannelInboundHandler in the pipeline will offer() a Response to the SynchronousQueue<Response>, which will allow the take() to unblock and receive the data.
Is this too convoluted? Are there any examples around of synchronous Netty client implementations that I can look at? Are there any best practices for Netty?
I could obviously use just standard Java sockets, however the power of Netty for parsing custom protocols, along with the ease of maintainability, is far too great to give up.
UPDATE:
Just regarding the implementation: I used an ArrayBlockingQueue<>() with put() and remove() rather than offer() and remove(), because I wanted to ensure that subsequent requests to the legacy hardware were only sent once any active request had been replied to; the legacy hardware's behaviour is not known with certainty otherwise.
The reason offer() and remove() did not work for me was that, on a SynchronousQueue, offer() will not hand anything over unless there is an actively blocking take() request on the other side. The converse is also true: remove() will not return anything unless there is a blocking put() call inserting data.
I couldn't use put()/remove() because the remove() statement would never be reached: no request had yet been written to the channel to trigger the event from which remove() would be called. I couldn't use offer()/take() because the offer() statement would return false, as the take() call had not been executed yet.
Using an ArrayBlockingQueue<>() with a capacity of 1 ensured that only one command could be executed at once. Any other command would block until there was sufficient room to insert; with a capacity of 1 this meant the queue had to be empty. The queue was emptied once a response had been received from the legacy hardware. This ensured nicely synchronous behaviour toward the legacy hardware while providing an asynchronous API to the users of the legacy hardware, of which there are many.
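For reference, a minimal sketch of that capacity-1 arrangement (assuming the Promise-based handler from the answer below, which remove()s and completes the promise when the reply arrives):
private final BlockingQueue<Promise<Response>> queue = new ArrayBlockingQueue<>(1);

public Future<Response> issueCommandToLegacyHardware(Command command) throws InterruptedException {
    Promise<Response> promise = channel.eventLoop().newPromise();
    queue.put(promise);             // blocks while a previous command is still outstanding
    channel.writeAndFlush(command);
    return promise;                 // completed by the inbound handler when the reply arrives
}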

Instead of designing your application in a blocking manner using a SynchronousQueue<Response>, design it in a nonblocking manner using a SynchronousQueue<Promise<Response>>.
Your public Future<Response> issueCommandToLegacyHardware(Command command) should then use offer() to add a DefaultPromise<>() to the queue, and the Netty pipeline can use remove() to get the promise to complete for that request. Notice I used remove() instead of take(): only under exceptional circumstances is there no element present.
A quick implementation of this might be:
import java.util.concurrent.SynchronousQueue;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.concurrent.Promise;

public class MyLastHandler extends SimpleChannelInboundHandler<Response> {

    private final SynchronousQueue<Promise<Response>> queue;

    public MyLastHandler(SynchronousQueue<Promise<Response>> queue) {
        super();
        this.queue = queue;
    }

    // The following is called messageReceived(ChannelHandlerContext, Response) in 5.0.
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Response msg) {
        // Complete the oldest outstanding promise with the reply.
        this.queue.remove().setSuccess(msg); // Or setFailure(Throwable)
    }
}
The above handler should be placed last in the chain.
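For context, wiring the handler in might look like this (ResponseDecoder is a hypothetical decoder for the custom protocol, not from the original post):
bootstrap.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new ResponseDecoder());    // assumed decoder producing Response objects
        ch.pipeline().addLast(new MyLastHandler(queue)); // must be last in the chain
    }
});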
The implementation of public Future<Response> issueCommandToLegacyHardware(Command command) can then look like this:
Channel channel = ....;
SynchronousQueue<Promise<Response>> queue = ....;

public Future<Response> issueCommandToLegacyHardware(Command command) {
    return issueCommandToLegacyHardware(command, channel.eventLoop().newPromise());
}

public Future<Response> issueCommandToLegacyHardware(Command command, Promise<Response> promise) {
    queue.offer(promise);
    channel.writeAndFlush(command); // flush as well, so the command is actually sent
    return promise;
}
Using the overload on issueCommandToLegacyHardware is also the design pattern used for Channel.write; this makes it really flexible.
This design pattern can be used as follows in client code:
issueCommandToLegacyHardware(
    Command.TAKE_OVER_THE_WORLD_WITH_FIRE,
    channel.eventLoop().newPromise()
).addListener(
    (Future<Response> f) -> {
        System.out.println("We have taken over the world: " + f.get());
    }
);
The advantage of this design pattern is that no unneeded blocking is used anywhere, just plain async logic.
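If the blocking signature from the question is still wanted, it can be layered on top of the asynchronous one; a sketch, not part of the original answer:
public Response issueCommandToLegacyHardwareBlocking(Command command) throws Exception {
    // Netty futures extend java.util.concurrent.Future, so blocking retrieval works; prefer a timeout.
    return issueCommandToLegacyHardware(command).get(30, TimeUnit.SECONDS);
}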
Appendix I: Javadoc: Promise, Future, DefaultPromise

Related

Unit testing Flink Topology without using MiniClusterWithClientResource

I have a Flink topology that consists of multiple Map and FlatMap transformations. The source/sink are from/to Kafka. The Kafka records are of type Envelope (defined by someone else) and are not marked as "serializable". I want to unit test this topology.
I defined a simple SourceFunction that returns a list of Envelope as the source:
import java.util.List;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

public class MySource extends RichParallelSourceFunction<Envelope> {

    private List<Envelope> input;

    public MySource(List<Envelope> input) {
        this.input = input;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
    }

    @Override
    public void run(SourceContext<Envelope> ctx) throws Exception {
        for (Envelope listElement : input) {
            ctx.collect(listElement);
        }
    }

    @Override
    public void cancel() {}
}
I am using MiniClusterWithClientResource to unit test the topology. I ran into two problems:
I need to make MySource serializable, as Flink wants/needs to serialize the source. As a workaround, I made input transient. That allowed the code to compile.
Then I ran into the runtime error:
org.apache.flink.api.common.functions.InvalidTypesException: The return type of function 'Custom Source' could not be determined automatically, due to type erasure. You can give type information hints by using the returns(...) method on the result of the transformation call, or by letting your function implement the 'ResultTypeQueryable' interface.
I am trying to understand why I am getting this error, which I was not getting before when the topology was consuming from a Kafka cluster using a KafkaConsumer. I found a workaround: providing the type info using the following:
.returns(TypeInformation.of(Envelope.class))
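For illustration, the hint attaches to the result of the addSource call (the stream and env names here are assumptions, not from the original post):
DataStream<Envelope> stream = env
    .addSource(new MySource(input))
    .returns(TypeInformation.of(Envelope.class));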
However, during runtime, after deserialization, input is set to null (unsurprisingly, as there is no deserialization method defined).
Questions:
Can someone please help me understand why I am getting the InvalidTypesException exception?
Why is MySource being deserialized/serialized? Is there a way I can avoid this while using MiniClusterWithClientResource?
I could hack some writeObject() and readObject() methods into MySource, but I would prefer to avoid that route. Is it possible to use some framework/class to test the topology without providing a Source (and Sink) that is Serializable? It would be great if I could use something like KeyedOneInputStreamOperatorTestHarness that I could pass the topology to, and avoid the whole deserialization/serialization step at the beginning.
Any ideas / pointers would be greatly appreciated.
Thank you,
Ahmed.
"why I am getting the InvalidTypesException exception?"
Not sure, usually I'd need to see the workflow definition to understand where the type information is getting dropped.
"Why if MySource being deserialized/serialized?"
Because Flink distributes operators to multiple tasks on multiple machines by serializing them, then sending over the network, and then deserializing.
"Is there a way I can void this while using MiniClusterWithClientResource?"
Yes. Since the MiniCluster runs in a single JVM, you can use a static ConcurrentLinkedQueue to hold all of the Envelope records, and your MySource just reads from this queue.
Nit: Your MySource should set a transient boolean running flag to true in the open() method, false in the cancel() method, and check it in the run() method's loop.
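Combining both suggestions, a minimal sketch (the static queue, the flag, and the names are illustrative, not from the original answer):
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

public class MySource extends RichParallelSourceFunction<Envelope> {

    // Visible to test code because the MiniCluster runs in a single JVM.
    public static final ConcurrentLinkedQueue<Envelope> QUEUE = new ConcurrentLinkedQueue<>();

    private transient volatile boolean running;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        running = true;
    }

    @Override
    public void run(SourceContext<Envelope> ctx) throws Exception {
        while (running) {
            Envelope next = QUEUE.poll();
            if (next != null) {
                ctx.collect(next);
            } else {
                Thread.sleep(10); // avoid busy-spinning while the queue is empty
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}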

How to build an async REST endpoint that calls a blocking action in a worker thread and replies instantly (Quarkus)

I checked the docs and Stack Overflow but didn't find a suitable approach.
E.g. this post seems very close: Dispatch a blocking service in a Reactive REST GET endpoint with Quarkus/Mutiny
However, I don't want so much unnecessary boilerplate code in my service; ideally, no service code change at all.
I generally just want to call a service method which uses the entity manager, and is thus a blocking action, but return a string to the caller immediately, like "query started" or something. I don't need a callback object; it's just a fire-and-forget approach.
I tried something like this
@NonBlocking
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public Uni<String> triggerQuery() {
    return Uni.createFrom()
        .item("query started")
        .call(() -> service.startLongRunningQuery());
}
But it's not working -> Error message returned to the caller:
You have attempted to perform a blocking operation on a IO thread. This is not allowed, as blocking the IO thread will cause major performance issues with your application. If you want to perform blocking EntityManager operations make sure you are doing it from a worker thread.
I actually expected Quarkus to take care of distributing the tasks accordingly, that is, the REST call to an IO thread and the blocking entity manager operations to a worker thread.
So I must be using it wrong.
UPDATE:
I also tried a proposed workaround that I found in https://github.com/quarkusio/quarkus/issues/11535, changing the method body to
return Uni.createFrom()
    .item("query started")
    .emitOn(Infrastructure.getDefaultWorkerPool())
    .invoke(() -> service.startLongRunningQuery());
Now I don't get an error, but service.startLongRunningQuery() is not invoked, so there are no logs and no query is actually sent to the db.
Same with (How to call long running blocking void returning method with Mutiny reactive programming?):
return Uni.createFrom()
    .item(() -> service.startLongRunningQuery())
    .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
Same with (How to run blocking codes on another thread and make http request return immediately):
ExecutorService executor = Executors.newFixedThreadPool(10, r -> new Thread(r, "CUSTOM_THREAD"));
return Uni.createFrom()
    .item(() -> service.startLongRunningQuery())
    .runSubscriptionOn(executor);
Any idea why service.startLongRunningQuery() is not called at all, and how to achieve fire-and-forget behaviour, with the REST call handled on an IO thread and the service call on a worker thread?
It depends on whether you want to return immediately (before your startLongRunningQuery operation is effectively executed) or wait until the operation completes.
In the first case, use something like:
@Inject EventBus bus;

@NonBlocking
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public void triggerQuery() {
    bus.send("some-address", "my payload");
}

@Blocking // Will be called on a worker thread
@ConsumeEvent("some-address")
public void executeQuery(String payload) {
    service.startLongRunningQuery();
}
In the second case, you need to execute the query on a worker thread.
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public Uni<String> triggerQuery() {
    return Uni.createFrom().item(() -> service.startLongRunningQuery())
        .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
}
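With RESTEasy Reactive, an alternative for the second case is to annotate the endpoint itself with @Blocking (io.smallrye.common.annotation.Blocking), which dispatches the whole method to a worker thread; a sketch, assuming the same service bean:
@Blocking // RESTEasy Reactive runs this method on a worker thread
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public String triggerQuery() {
    service.startLongRunningQuery();
    return "query finished"; // only returned once the query completes
}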
Note that you need RESTEasy Reactive for this to work (and not classic RESTEasy). If you use classic RESTEasy, you would need the quarkus-resteasy-mutiny extension (but I would recommend using RESTEasy Reactive, it will be way more efficient).
Use the EventBus for that: https://quarkus.io/guides/reactive-event-bus
Send and forget is the way to go.

How do I gracefully shut down a StreamExecutionEnvironment that was started with the execute method

I am trying to shut down a StreamExecutionEnvironment that is started during one of our JUnit integration tests. Once all the items in the stream are processed, I want to be able to shut down this execution environment in a deterministic fashion.
Right now when I call the StreamExecutionEnvironment.execute method, it never returns from that call.
[Maybe I am too late here, but I will answer for those who are looking for an answer or a hint.]
Actually, what you need to do is gracefully exit from the SourceFunction<T>.
Then the whole StreamExecutionEnvironment will be closed automatically. To do that, you may need a special end event to send into your source function.
Write your own source function, or extend the pre-defined one you are using, so that it checks for this special incoming event, which will be emitted at the end of your integration tests, and breaks the loop or unsubscribes from the source. The basic pattern is shown below.
public class TestSource implements SourceFunction<Event> {

    @Override
    public void run(SourceContext<Event> ctx) throws Exception {
        while (hasContent()) {
            Event event = readNextEvent();
            if (isAnEndEvent(event)) {
                break; // graceful exit lets execute() return
            }
            ctx.collect(event);
        }
    }

    @Override
    public void cancel() {} // required by SourceFunction
}
Depending on your situation, in JUnit tests you should send this special event at the end of each test case or of all test cases.
// or @AfterClass
@After
public void doFinish() {
    // send the special event from a different thread.
}
Sometimes you might have to do this from a different thread (basically, wherever you generate the test events), not like above.
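For instance, the end event can be appended by the same thread that feeds the test events (testEvents and Event.END are hypothetical stand-ins for however your TestSource receives its input):
Thread generator = new Thread(() -> {
    testEvents.add(eventA);
    testEvents.add(eventB);
    testEvents.add(Event.END); // TestSource sees this and breaks its read loop
});
generator.start();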
Because of this, it is recommended to have a separate source function implementation for your tests, as it is easier to modify it to accept a special close event; never do this in the actual source function that is expected to go into production. :)
The javadoc of SourceFunction also describes a stop() function; for an example, see how TwitterSource is implemented.
Gracefully Stopping Functions
Functions may additionally implement the {@link org.apache.flink.api.common.functions.StoppableFunction} interface. "Stopping" a function, in contrast to "canceling", means a graceful exit that leaves the state and the emitted elements in a consistent state.

Why does async method block MVVM Light Relay Command

I'm new to async and need to consume an API that uses it. I've read I should "go async all the way" back to the UI command. So far I've propagated async back to my view model.
The code below blocks the Upload button in my UI. Is this because the RelayCommand's implementation calls it using await?
// In the ViewModel:
public MyViewModel()
{
    ...
    UploadRelayCommand = new RelayCommand(mUpload, () => CanUpload);
    ...
}

private async void mUpload()
{
    ...
    await mModel.Upload();
    ...
}

// In the model:
public async Task UploadToDatabase()
{
    ...
    projectToUse = await api.CreateProjectAsync(ProjectName);
    ...
}

// In the API
public async Task<Project> CreateProjectAsync(Project project) {}
Update: Sven's comment led me to find that CreateProjectAsync was running in a simulation mode that synchronously wrote to memory. When I wrapped that code in Task.Run, it no longer blocked my Upload button. When not running in simulation mode, the API natively makes asynchronous calls to interact with a web server, so those don't block either.
Thanks.
The await itself will not block your UI. It is more likely that your Upload() method does not do any real asynchronous work.
(As Jim suggested, Task.Run() can be used in such a case. It will use the thread pool to run the operation in the background. Generally speaking, for IO-bound operations like uploads/downloads you should check if your API supports asynchronous calls natively. If such an implementation exists, it may make more efficient use of system resources than using a thread.)

Camel RabbitMQ Acknowledgement

I am using Camel for my messaging application. In my use case I have a producer (which is RabbitMQ here), and the consumer is a bean.
from("rabbitmq://127.0.0.1:5672/exDemo?queue=testQueue&username=guest&password=guest&autoAck=false&durable=true&exchangeType=direct&autoDelete=false")
.throttle(100).timePeriodMillis(10000)
.process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
MyCustomConsumer.consume(exchange.getIn().getBody())
}
});
Apparently, when autoAck is false, acknowledgement is sent when the process() execution is finished (please correct me if I am wrong here)
Now I don't want to acknowledge when the process() execution is finished; I want to do it at a later stage. I have a BlockingQueue in my MyCustomConsumer that consume() puts messages onto, and MyCustomConsumer has a different mechanism to process them. I want to acknowledge a message only when MyCustomConsumer finishes processing it from the BlockingQueue. How can I achieve this?
You can consider using the Camel AsyncProcessor API and calling the callback's done() once you have processed the message from the BlockingQueue.
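A rough sketch of that idea (note the caveat in the next answer: the stock RabbitMQ consumer invokes the processor synchronously, so the component must actually honor the async API for this to defer the ack; the two-argument consume() is hypothetical):
import org.apache.camel.AsyncCallback;
import org.apache.camel.AsyncProcessor;
import org.apache.camel.Exchange;

public class DeferredAckProcessor implements AsyncProcessor {

    @Override
    public boolean process(Exchange exchange, AsyncCallback callback) {
        // Hand the message and the callback to the consumer; it invokes
        // callback.done(false) once the message has left its BlockingQueue.
        MyCustomConsumer.consume(exchange.getIn().getBody(), () -> callback.done(false));
        return false; // false = the exchange will be completed asynchronously
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        // Synchronous fallback: process and acknowledge immediately.
        MyCustomConsumer.consume(exchange.getIn().getBody(), () -> { });
    }
}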
I bumped into the same issue.
The Camel RabbitMQConsumer.RabbitConsumer implementation does
consumer.getProcessor().process(exchange);

long deliveryTag = envelope.getDeliveryTag();
if (!consumer.endpoint.isAutoAck()) {
    log.trace("Acknowledging receipt [delivery_tag={}]", deliveryTag);
    channel.basicAck(deliveryTag, false);
}
So it's just expecting a synchronous processor.
If you bind this to a seda route for instance, the process method returns immediately and you're pretty much back to the autoAck situation.
My understanding is that we need to make our own RabbitMQ component to do something like:
consumer.getAsyncProcessor().process(exchange, new AsyncCallback() {
    public void done(boolean doneSync) {
        if (!consumer.endpoint.isAutoAck()) {
            long deliveryTag = envelope.getDeliveryTag();
            log.trace("Acknowledging receipt [delivery_tag={}]", deliveryTag);
            channel.basicAck(deliveryTag, false);
        }
    }
});
Even then, the semantics of the doneSync parameter are not clear to me. I think it's merely a marker to identify whether we're dealing with a real async processor or a synchronous processor that was automatically wrapped into an async one.
Maybe someone can validate or invalidate this solution?
Is there a lighter/faster/stronger alternative?
Or could this be suggested as the default implementation for the RabbitMQConsumer?
