Google PubSub Simultaneous Publish Requests - google-cloud-pubsub

In Google PubSub, the publish call from the client can be invoked asynchronously. Because of this, I would think it is possible to have multiple publish requests triggered and sent to the server at the same time, especially if the batch thresholds are set too low.
If this is true, how does the PubSub client control the number of simultaneous publish requests that can be created? Is there a hard limit, or an error that can occur if too many requests are created? Is this the intended use of an asynchronous publisher, or is it simply to allow other, non-publishing activity to occur?
Though this question applies to any of the clients, we are specifically having an issue with the C# client, and are intermittently receiving the following error:
Grpc.Core.RpcException: Status(StatusCode=DeadlineExceeded, Detail="Deadline Exceeded")
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Google.Api.Gax.Grpc.ApiCallRetryExtensions.<>c__DisplayClass0_0`2.<<WithRetry>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
My thought is that we are sending too many publish requests, but I am not sure.

I would advise against using the raw gRPC code directly; instead, use the client library, which is a very thin wrapper around it.
Looking at the client source code always helps me; you can find the C# code here: PublisherClient.cs (thin wrapper).
If you are using PublishAsync, it queues/batches the messages anyway; the behaviour is controlled by the settings you give to the client (see PublisherServiceApiClient for how to tune it). You can also control the number of client connections the client uses to send the queued batches. I suggest playing with the batch size first, then the number of connections, until you find the sweet spot for your throughput.
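For reference, this is a minimal sketch of what tuning those thresholds looks like with the Java client (the C# client exposes analogous batching settings; the project, topic and threshold values below are placeholders, not recommendations):

import com.google.api.gax.batching.BatchingSettings;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.pubsub.v1.TopicName;
import org.threeten.bp.Duration;

public class TunedPublisher {
    public static Publisher create() throws Exception {
        // Larger batches mean fewer simultaneous publish RPCs for the same message volume
        BatchingSettings batching = BatchingSettings.newBuilder()
            .setElementCountThreshold(500L)            // messages per batch
            .setRequestByteThreshold(512 * 1024L)      // bytes per batch
            .setDelayThreshold(Duration.ofMillis(50))  // max wait before a partial batch is sent
            .build();
        return Publisher.newBuilder(TopicName.of("my-project", "my-topic"))
            .setBatchingSettings(batching)
            .build();
    }
}

The trade-off is the usual one: higher thresholds reduce the number of concurrent publish requests (and the chance of hitting deadlines), at the cost of some added publish latency.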

Related

Replay failed messages whenever I want, at any time

#camel Hi Devs, I am currently working with Camel to transform messages from source to target systems, and I am stuck on an issue: I want to redeliver my message when an exception occurs or an endpoint fails. I checked the Camel docs and found information about redelivery policies, which work according to the configured delay time. But my problem is that I want to replay messages whenever I choose. For example, last year some messages failed and their payloads are stored in my system, and I want to replay those messages this year, like a replay. Can any devs help me with this? Thanks.
You can simply use the DeadLetterChannel EIP (https://camel.apache.org/components/3.16.x/eips/dead-letter-channel.html).
This will put your failed messages in a special channel (typically a persistent JMS queue). To replay a message, you only have to move it from the DLC ("myqueue.dead") back to the original queue ("myqueue").
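As a rough sketch in the Java DSL (queue names and the transformation bean are placeholders, assuming a JMS component registered as "jms"):

import org.apache.camel.builder.RouteBuilder;

public class SyncRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Exchanges that still fail after the redeliveries end up on the dead letter queue
        errorHandler(deadLetterChannel("jms:queue:myqueue.dead")
            .maximumRedeliveries(3)
            .redeliveryDelay(5000)
            .useOriginalMessage());   // keep the original payload so it can be replayed later

        from("jms:queue:myqueue")
            .to("bean:transformToTarget")   // placeholder for your transformation
            .to("jms:queue:target");
    }
}

Replaying then simply means moving the stored messages from myqueue.dead back onto myqueue, with whatever tool or on-demand route you prefer, whenever you decide to.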

How can I send a notification to an endpoint when a Throttler starts/stops throttling?

As the title says, I would like to send a notification to an endpoint if messages to it start getting throttled, and another message when the throttling stops.
I currently have the following (very basic) route configuration:
from("test-jms:queue:test.queue")
.throttle(2)
.to("file://test");
This configuration throttles messages just fine, but I need a way to let the consumer know that the messages are being throttled.
When the Throttler starts throttling, I would like to send a notification to the 'to' endpoint so those reading the messages know that they are being throttled. I would also like to be able to send another message when the Throttler is no longer throttling, so the consumer knows the messages are up to date.
This doesn't appear to be something the Throttler does. The only way I see of getting a notification when it starts throttling is setting rejectExecution to true, at which point it will throw an exception. The problem is that execution stops at that point, and no more messages are passed through (since an exception was thrown).
My current thoughts are that I will need to create a custom bean/processor/something that performs essentially the same function as the Throttler, but also injects a message when the throttling starts or stops. I don't want to do that unless I really need to, though. Any help is appreciated. Thanks!
No, the Throttler EIP does not provide such information (though you may be able to grab some statistics via JMX). A different thought would be to reverse the direction, so the consumers signal upstream when they want new messages (this is what reactive systems do).
I assume writing to a file as above is just an example; what consumers are you using in real life, and do they really need to know that some messages are backed up within a short time period of one second because they are throttled? Also, since your source is JMS, you can look at the route throttling policy, which lets you suspend/resume the JMS consumer instead of using the Throttler EIP.
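If you go the route policy way, a minimal sketch looks like the following (ThrottlingInflightRoutePolicy lives in org.apache.camel.throttling in Camel 3.x, org.apache.camel.impl in Camel 2.x; the thresholds are arbitrary):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.throttling.ThrottlingInflightRoutePolicy;

public class ThrottledRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Suspends the JMS consumer when too many exchanges are in flight,
        // and resumes it once the backlog drops below the resume percentage
        ThrottlingInflightRoutePolicy policy = new ThrottlingInflightRoutePolicy();
        policy.setMaxInflightExchanges(10);
        policy.setResumePercentOfMax(70);

        from("test-jms:queue:test.queue")
            .routePolicy(policy)
            .to("file://test");
    }
}

Because the consumer itself is suspended rather than messages being delayed inside the route, the backlog stays on the JMS broker, which is usually where you want it.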

Tasks targeted at dynamic backend fail frequently, silently

I had converted some tasks to run on a dynamic backend.
The tasks are failing silently [no logged error, no retry, nothing] ~20% of the time (min:10%, max:60%, sample:large, long term). Switching the task away from the backend restores retries and gets the failure rate back to ~0%.
Any ideas?
Converting it to a backend exacerbated the problem but wasn't the root cause.
I had specified a task_retry_limit and the queue was a push queue. With a backend, the number of instances is explicitly specified. (I believe you can replicate this issue on the frontend by ramping requests up rapidly to a large number.)
Tasks were failing with 503: Instance Unavailable until they hit the task_retry_limit. This is visible temporarily in Task Queues, but will not show up in Logs.
I should be using pull queues. Even if my use case was ill-suited, I'd still +1 having a task that dies after multiple 503: Instance Unavailable responses log something, so it doesn't look like a phantom task.
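For anyone on the Java runtime, raising the retry limit on an individual task looks roughly like this (queue name, worker URL and backend host are placeholders; queue-level defaults can also be set in queue.xml):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.RetryOptions;
import com.google.appengine.api.taskqueue.TaskOptions;

public class Enqueue {
    public static void enqueueWork() {
        Queue queue = QueueFactory.getQueue("backend-queue");       // placeholder queue name
        queue.add(TaskOptions.Builder
            .withUrl("/work/handler")                               // placeholder worker URL
            .header("Host", "mybackend.myapp.appspot.com")          // one way to target a named backend
            .retryOptions(RetryOptions.Builder
                .withTaskRetryLimit(10)                             // leave room for transient 503s
                .minBackoffSeconds(2)));
    }
}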
Which runtime are you using on the backend?
Try running the backend for a bit without dynamic set to true and exercise the failing component.
On my project, I have seen tasks that target a static backend disappear on occasion, but nowhere near the rate you are seeing.

Service Broker error handling simulation

I am currently working on a project in which multiple POSes are synchronized to a main server using the Service Broker feature. I am now preparing error handling for this solution and want to show the client how it works. That means I will prepare test scripts for every kind of error, and the client will run them on a test POS to see whether errors are processed correctly.
We will use SQL Server 2008 R2 with poison message handling = OFF.
Message type = XML (but it can contain different types of data; some nodes will contain BLOBs).
POSes will be outside of the domain, so transport will be secured (but without dialog encryption).
I divide the errors into several sub-groups:
1. Logical error (e.g. a string instead of a number). It will be processed by a TRY-CATCH block on the server side. It is easy to simulate.
2. Service Broker configuration error (a message either will not be returned or cannot reach its destination). I think it can be handled by using SQL Server Service Broker events, and the simulation will be some kind of "bad configuration" (SB GUID, service name, etc.).
3. Transport error. This is when we have a broken message. In fact, it is the client's wish to test this kind of error. I do not know whether, given a secured transport level (certificate), we are protected from this kind of error. Another question is how I can simulate this.
Questions:
Are there other error types?
Is the error handling logic described for #2 good enough?
How do I handle and simulate #3?
The second part of my article here goes into a discussion of Service Broker errors, how they occur and how to handle them. The important thing for you is to distinguish between two categories of errors:
recoverable: transport problems, most configuration errors like bad routing, or an unreachable server. All of these will result not in an SSB error but in a delay. Messages will stay in sys.transmission_queue on the expectation that the problem is transient and can be solved, including some configuration problems. Once the problem is solved, SSB will retry and the message gets delivered (a small sketch for watching this queue follows at the end of this answer).
unrecoverable: these are problems SSB deems non-recoverable, e.g. a bad message format. In such a case the conversation will be aborted and both endpoints receive an Error message.
I also have an article Error Handling in Service Broker procedures that discusses some of the topics particular to exception handling in SSB activated context.
A final note: I strongly discourage you from turning poison message detection OFF. It is much better to disable the processing than to spin ad nauseam without making progress because of a poison message.
As for how to simulate a corrupted message: it is hard to simulate (you can try setting up a port forwarder that lets all traffic pass through but randomly corrupts some of it), but it is rather pointless. All SSB traffic, even when in clear text, is cryptographically signed, and any message corruption will result in an abrupt disconnect due to a message signing validation failure.
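As mentioned under the recoverable category above, watching sys.transmission_queue during a simulated outage is a good way to demonstrate to the client that messages are being held rather than lost; the transmission_status column explains why a message has not left yet. A rough JDBC sketch (the connection string is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TransmissionQueueCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; point it at the sending (POS) database
        String url = "jdbc:sqlserver://localhost;databaseName=PosDb;integratedSecurity=true";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT to_service_name, enqueue_time, transmission_status "
                 + "FROM sys.transmission_queue")) {
            while (rs.next()) {
                // An empty transmission_status means the message simply has not been sent yet
                System.out.printf("%s | %s | %s%n",
                    rs.getString(1), rs.getTimestamp(2), rs.getString(3));
            }
        }
    }
}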

SQL Server Service Broker Service Disappearing (Automatically Deleted)?

I've implemented a messaging system over SQL Server Service Broker. It is working great, with the sole exception that every once in a while (maybe once per week per server) my initiator service just vanishes without a trace. The corresponding queue is still there, but the service is missing.
Obviously this causes problems in my system. It's a simple matter to recreate the service by hand, but I'm confused as to what might cause this behavior. I understand that automatic poison message handling causes queues to be disabled, but I don't see anything that indicates services can be disabled or deleted automatically.
When this happens, I usually have a large backlog of messages in multiple application queues, but nothing extreme. Total message backlog is around 200,000.
Does anyone know what might be happening here?
You must have a bug of some sort that issues a DROP SERVICE statement. That is the only way a service gets deleted.
Check the default trace; the DROP statement gets traced and saved there, so you can track down the application/user/statement that issued the DROP. Check sys.traces to find the location of the default trace, then open the .TRC file in Profiler.
