Prioritizing a message on Google Pub/Sub - google-cloud-pubsub

I have a pubsub topic with a number of pull subscriptions. I would like some mechanism where I can publish a message with a "priority" label that causes the message to jump as near to the front of the queue as possible.
I don't need any guaranteed ordering semantics, just a "best effort" prioritization mechanism.
Is anything like this possible with Pub/Sub?

No such mechanism exists within Google Cloud Pub/Sub, no. Such a feature really only becomes relevant when your subscribers cannot keep up with the rate of publishing and, consequently, a backlog builds up. If subscribers are keeping up and processing and acking messages quickly, then the notion of "priority" messages isn't really necessary.
If a backlog is building up and some messages need to be processed with higher priority, then one approach is to create a "high-priority" topic and subscription. The subscribers subscribe to this subscription as well as the "normal" subscription and prioritize processing messages from the "high-priority" subscription whenever they arrive.
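On the publisher side, this two-topic approach just means choosing a topic at publish time. Here is a minimal sketch using the Python client (the project and topic names are hypothetical):
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# hypothetical project/topic names - substitute your own
normal_topic = publisher.topic_path("my-project", "work-items")
priority_topic = publisher.topic_path("my-project", "work-items-priority")

def publish(data: bytes, high_priority: bool = False):
    # route the message to the dedicated topic when it is high priority
    topic = priority_topic if high_priority else normal_topic
    future = publisher.publish(topic, data)
    return future.result()  # blocks until Pub/Sub has accepted the message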

Providing an example implementation for @Kamal's answer, in an attempt to give more context to:
...prioritize processing messages from the "high-priority" subscription whenever they arrive
import logging
import threading

from google.cloud import pubsub
from google.cloud.pubsub_v1.types import FlowControl

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)

# condition variable guarding n_priority_messages; batch workers wait on it
# until the priority backlog has drained
c = threading.Condition()
n_priority_messages = 0

def priority_callback(message):
    logging.info(f"PRIORITY received: {message.message_id}")
    global n_priority_messages
    c.acquire()
    n_priority_messages += 1
    c.release()
    handle_message(message)
    logging.info(f"PRIORITY handled: {message.message_id}")
    c.acquire()
    n_priority_messages -= 1
    if n_priority_messages == 0:
        # wake up any batch workers waiting for the priority backlog to drain
        c.notify_all()
    c.release()

def batch_callback(message):
    logging.info(f"BATCH received: {message.message_id}")
    done = False
    modify_count = 0
    global n_priority_messages
    while not done:
        c.acquire()
        priority_queue_is_empty = n_priority_messages == 0
        c.release()
        if priority_queue_is_empty:
            handle_message(message)
            logging.info(f"BATCH handled: {message.message_id}")
            done = True
        else:
            # extend the lease so the message is not redelivered while we wait
            message.modify_ack_deadline(15)
            modify_count += 1
            logging.info(
                f"BATCH modified deadline: {message.message_id} - count: {modify_count}"
            )
            c.acquire()
            c.wait(timeout=10)
            c.release()

subscriber = pubsub.SubscriberClient()
subscriber.subscribe(
    subscription=batch_subscription,
    callback=batch_callback,
    # adjust according to latency/throughput requirements
    flow_control=FlowControl(max_messages=5),
)
pull_future = subscriber.subscribe(
    subscription=priority_subscription,
    callback=priority_callback,
    # adjust according to latency/throughput requirements
    flow_control=FlowControl(max_messages=2),
)
pull_future.result()
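Note that handle_message, batch_subscription and priority_subscription are left undefined above. A minimal sketch of what they could look like, assuming hypothetical project and subscription names and a handler that just simulates some work:
import time

batch_subscription = "projects/my-project/subscriptions/batch-subscription"
priority_subscription = "projects/my-project/subscriptions/priority-subscription"

def handle_message(message):
    # simulate a few seconds of processing, then ack so Pub/Sub
    # does not redeliver the message
    time.sleep(3)
    message.ack()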
Example output when there is a backlog of priority and batch messages:
...
2021-07-29 10:25:00,115 PRIORITY received: 2786647736421842
2021-07-29 10:25:00,338 PRIORITY handled: 2786647736421841
2021-07-29 10:25:00,392 PRIORITY received: 2786647736421843
2021-07-29 10:25:02,899 BATCH modified deadline: 2786667941800415 - count: 2
2021-07-29 10:25:03,016 BATCH modified deadline: 2786667941800416 - count: 2
2021-07-29 10:25:03,016 BATCH modified deadline: 2786667941800417 - count: 2
2021-07-29 10:25:03,109 BATCH modified deadline: 2786667941800418 - count: 2
2021-07-29 10:25:03,109 BATCH modified deadline: 2786667941800419 - count: 2
2021-07-29 10:25:03,654 PRIORITY handled: 2786647736421842
2021-07-29 10:25:03,703 PRIORITY received: 2786647736421844
2021-07-29 10:25:03,906 PRIORITY handled: 2786647736421843
2021-07-29 10:25:03,948 PRIORITY received: 2786647736421845
2021-07-29 10:25:07,212 PRIORITY handled: 2786647736421844
2021-07-29 10:25:07,242 PRIORITY received: 2786647736421846
2021-07-29 10:25:07,459 PRIORITY handled: 2786647736421845
2021-07-29 10:25:07,503 PRIORITY received: 2786647736421847
2021-07-29 10:25:10,764 PRIORITY handled: 2786647736421846
2021-07-29 10:25:10,807 PRIORITY received: 2786647736421848
2021-07-29 10:25:11,004 PRIORITY handled: 2786647736421847
2021-07-29 10:25:11,061 PRIORITY received: 2786647736421849
2021-07-29 10:25:12,900 BATCH modified deadline: 2786667941800415 - count: 3
2021-07-29 10:25:13,016 BATCH modified deadline: 2786667941800416 - count: 3
2021-07-29 10:25:13,017 BATCH modified deadline: 2786667941800417 - count: 3
2021-07-29 10:25:13,110 BATCH modified deadline: 2786667941800418 - count: 3
2021-07-29 10:25:13,110 BATCH modified deadline: 2786667941800419 - count: 3
2021-07-29 10:25:14,392 PRIORITY handled: 2786647736421848
2021-07-29 10:25:14,437 PRIORITY received: 2786647736421850
2021-07-29 10:25:14,558 PRIORITY handled: 2786647736421849
...

Related

Camel reactive streams not completing when subscribed more than once

@Component
class TestRoute(
    context: CamelContext,
) : EndpointRouteBuilder() {
    val streamName: String = "news-ticker-stream"
    val logger = LoggerFactory.getLogger(TestRoute::class.java)
    val camel: CamelReactiveStreamsService = CamelReactiveStreams.get(context)
    var count = 0L
    val subscriber: Subscriber<String> =
        camel.streamSubscriber(streamName, String::class.java)

    override fun configure() {
        from("timer://foo?fixedRate=true&period=30000")
            .process {
                count++
                logger.info("Start emitting data for the $count time")
                Flux.fromIterable(
                    listOf(
                        "APPLE", "MANGO", "PINEAPPLE"
                    )
                )
                    .doOnComplete {
                        logger.info("All the data are emitted from the flux for the $count time")
                    }
                    .subscribe(
                        subscriber
                    )
            }

        from(reactiveStreams(streamName))
            .to("file:outbox")
    }
}
2022-07-07 13:01:44.626 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 1 time
2022-07-07 13:01:44.640 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : All the data are emitted from the flux for the 1 time
2022-07-07 13:01:44.646 INFO 50988 --- [1 - timer://foo] a.c.c.r.s.ReactiveStreamsCamelSubscriber : Reactive stream 'news-ticker-stream' completed
2022-07-07 13:02:14.616 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 2 time
2022-07-07 13:02:44.610 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 3 time
2022-07-07 13:02:44.611 WARN 50988 --- [1 - timer://foo] a.c.c.r.s.ReactiveStreamsCamelSubscriber : There is another active subscription: cancelled
The reactive stream is not completing when it runs more than once. As you can see in the logs, the doOnComplete log message only appears the first time the timer route triggers. When the timer route triggers for the second time, there is no completion message. I put a breakpoint in ReactiveStreamsCamelSubscriber and found that the first time, the flow goes into the onNext() and onComplete() methods, but it does not reach these methods when the timer runs the second time. I am not able to understand why this is happening.

How do I record a JANUS signal as a wav file?

I am testing interoperability between modems. One of my modems supports JANUS, and I believe UnetStack-based Subnero modems also support JANUS on phy[3]. How can I send and record a JANUS signal which I can use for preliminary testing with the other modem? Can someone please provide a basic snippet?
UnetStack indeed has an implementation of JANUS that is, by default, configured on phy[3].
You can check this on your modem (the sample outputs here are from unet audio SDOAM, and so your modem parameters might vary somewhat):
> phy[3]
« PHY »
[org.arl.unet.phy.PhysicalChannelParam]
fec = 7
fecList ⤇ [LDPC1, LDPC2, LDPC3, LDPC4, LDPC5, LDPC6, ICONV2]
frameDuration ⤇ 1.1
frameLength = 8
janus = true
[org.arl.yoda.FhbfskParam]
chiplen = 1
fmin = 9520.0
fstep = 160.0
hops = 13
scrambler = 0
sync = true
tukey = true
[org.arl.yoda.ModemChannelParam]
modulation = fhbfsk
preamble = (2400 samples)
threshold = 0.0
(I have dropped a few parameters that are not relevant to the discussion here to keep the output concise)
The key parameters to take note of:
modulation = fhbfsk and janus = true setup the modulation for JANUS
fmin = 9520.0, fstep = 160.0 and hops = 13 are the modulation parameters to setup fhbfsk as required by JANUS
fec = 7 chooses ICONV2 from the fecList, as required by JANUS
threshold = 0.0 indicates that reception of JANUS frames is disabled
NOTE: If your modem is a Subnero M25 series, the standard JANUS band is out of the modem's ~20-30 kHz operating band. In that case, the JANUS scheme is auto-configured to a higher frequency (which you will see as fmin in your modem). Do note that this frequency is important to match for interop with any other modem that might support JANUS at a higher frequency band.
To enable JANUS reception, you need to:
phy[3].threshold = 0.3
To avoid any other detections from CONTROL and DATA packets, we might want to disable those:
phy[1].threshold = 0
phy[2].threshold = 0
At this point, you could make a transmission by typing phy << new TxJanusFrameReq() and put a hydrophone next to the modem to record the transmitted signal as a wav file.
However, I'm assuming you would prefer to record on the modem itself, rather than with an external hydrophone. To do that, you can enable the loopback mode on the modem, and set up the modem to record the received signal:
phy.loopback = true # enable loopback
phy.fullduplex = true # enable full duplex so we can record while transmitting
phy[3].basebandRx = true # enable capture of received baseband signal
subscribe phy # show notifications from phy on shell
Now if you do a transmission, you should see a RxBasebandSignalNtf with the captured signal:
> phy << new TxJanusFrameReq()
AGREE
phy >> RxFrameStartNtf:INFORM[type:#3 rxTime:492455709 rxDuration:1100000 detector:0.96]
phy >> TxFrameNtf:INFORM[type:#3 txTime:492456016]
phy >> RxJanusFrameNtf:INFORM[type:#3 classUserID:0 appType:0 appData:0 mobility:false canForward:true txRxFlag:true rxTime:492455708 rssi:-44.2 cfo:0.0]
phy >> RxBasebandSignalNtf:INFORM[adc:1 rxTime:492455708 rssi:-44.2 preamble:3 fc:12000.0 fs:12000.0 (13200 baseband samples)]
That notification has your signal in baseband complex format. You can save it to a file:
save 'x.txt', ntf.signal, 2
To convert to a wav file, you'll need to load this signal and convert to passband. Here's some example Python code to do this:
import numpy as np
import scipy.io.wavfile as wav
import arlpy.signal as asig
x = np.genfromtxt('x.txt', delimiter=',')
x = x[:,0] + 1j * x[:,1]
x = asig.bb2pb(x, 12000, 12000, 96000)
wav.write('x.wav', 96000, x)
NOTE: You will need to replace the fd and fc values of 12000 with whatever the fs and fc fields are in your modem's RxBasebandSignalNtf. For Unet audio both are 12000, but for Subnero M25 series modems they are probably 24000.
Now you have your wav file at 96 kSa/s!
You could also plot a spectrogram to check if you wanted to:
import arlpy.plot as plt
plt.specgram(x, fs=96000)
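If your modem reports a different fs and fc in the RxBasebandSignalNtf, the conversion above can be wrapped in a small helper so the values are passed in rather than hard-coded. A sketch (the function name is mine; it assumes the comma-separated file produced by the save command above):
import numpy as np
import scipy.io.wavfile as wav
import arlpy.signal as asig

def bb_txt_to_wav(txt_file, wav_file, fd, fc, fs=96000):
    # load the baseband samples saved from the modem shell
    x = np.genfromtxt(txt_file, delimiter=',')
    x = x[:, 0] + 1j * x[:, 1]
    # upconvert the complex baseband signal to passband at fs Sa/s
    y = asig.bb2pb(x, fd, fc, fs)
    wav.write(wav_file, fs, y)

# e.g. for a Subnero M25 series modem, where fs and fc are likely 24000:
# bb_txt_to_wav('x.txt', 'x.wav', 24000, 24000)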
I have an issue while recording the signal: the modem refuses to send the JANUS frame. It looks like something is not set correctly on my end, specifically fmin = 12000.0, fstep = 160.0 and hops = 13. The actual modem won't let me set fmin to 9520.0 and automatically configures the lowest fmin = 12000. How can I calculate the corresponding parameters for fmin = 12000?
Your suggestion does work on Unet audio, though.
Here are my modem logs:
> phy[3]
« PHY »
[org.arl.unet.DatagramParam]
MTU ⤇ 0
RTU ⤇ 0
[org.arl.unet.phy.PhysicalChannelParam]
dataRate ⤇ 64.0
errorDetection ⤇ true
fec = 7
fecList ⤇ [LDPC1, LDPC2, LDPC3, LDPC4, LDPC5, LDPC6, ICONV2]
frameDuration ⤇ 1.0
frameLength = 8
janus = true
llr = false
maxFrameLength ⤇ 56
powerLevel = -10.0
[org.arl.yoda.FhbfskParam]
chiplen = 1
fmin = 12000.0
fstep = 160.0
hops = 13
scrambler = 0
sync = true
tukey = true
[org.arl.yoda.ModemChannelParam]
basebandExtra = 0
basebandRx = true
modulation = fhbfsk
preamble = (2400 samples)
test = false
threshold = 0.3
valid ⤇ false
> phy << new TxJanusFrameReq()
REFUSE: Frame type not setup correctly
phy >> FAILURE: Timed out

Apache Camel: Idempotent SFTP throwing an exception on another node unnecessarily? (Infinispan)

Following is the code:
return "sftp://"+getSftpHostName() +getSftpImportDirectory()
+ "?username="+getSftpUserName()
+ "&password="+getSftpPassword() // Stored on wildfly server
+ "&download=true" //Shall be read chunk by chunk to avoid heap space issues. Earlier download=true was used: Harpreet
+ "&useList=true"
+ "&stepwise=false"
+ "&disconnect=true"
+ "&passiveMode=true"
+ "&reconnectDelay=10000"
+ "&bridgeErrorHandler=true"
+ "&delay="+getSftpDelay()
+ "&include="+ getSftpFileName()
+ "&preMove=$simple{file:onlyname}.$simple{date:now:yyyy-MM-dd'T'hh-mm-ss}.processing"
+ "&move="+getSftpSuccessDirectory()+"$simple{file:onlyname.noext}.$simple{date:now:yyyy-MM-dd'T'hh-mm-ss}.success"
+ "&moveFailed="+getSftpFailedDirectory()+"$simple{file:onlyname.noext}.$simple{date:now:yyyy-MM-dd'T'hh-mm-ss}.failed"
+ "&readLock=idempotent-changed"
+ "&idempotentRepository=#infinispan"
+ "&readLockRemoveOnCommit=true";
We have three nodes. The file gets locked on Node 1 and the route starts, with all logs printed, but Node 2 only shows the following exception (Node 3 is fine):
Cannot rename file from: Hrm/test/From_HRM/import/Integrator_2.xml to: Hrm/test/From_HRM/import/Integrator_2.xml.2020-05-07T01-27-19.processing, StackTrace: org.apache.camel.component.file.GenericFileOperationFailedException: Cannot rename file from: Hrm/test/From_HRM/import/Integrator_2.xml to: Hrm/test/From_HRM/import/Integrator_2.xml.2020-05-07T01-27-19.processing
at org.apache.camel.component.file.remote.SftpOperations.renameFile(SftpOperations.java:467)
at org.apache.camel.component.file.strategy.GenericFileProcessStrategySupport.renameFile(GenericFileProcessStrategySupport.java:113)
at org.apache.camel.component.file.strategy.GenericFileRenameProcessStrategy.begin(GenericFileRenameProcessStrategy.java:45)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:360)
at org.apache.camel.component.file.remote.RemoteFileConsumer.processExchange(RemoteFileConsumer.java:137)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:219)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:183)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:174)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:101)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: 2: No such file
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2873)
at com.jcraft.jsch.ChannelSftp.rename(ChannelSftp.java:1950)
at org.apache.camel.component.file.remote.SftpOperations.renameFile(SftpOperations.java:463)
Why is Node 2 even trying to pick up that file? (I am not able to understand what changes I should make.)
(A smaller issue we face about once a month: a file gets lost and the above error appears on all three nodes. If anyone can comment on that as well, it would help too.)
Updated Infinispan settings:
public class InfinispanIdempodentRepositoryProducer {

    @Resource(lookup = "java:jboss/infinispan/container/camel")
    private CacheContainer container;

    @Produces
    @Named("infinispan")
    public InfinispanIdempotentRepository createInfinispanInfinispanIdempotentRepository() {
        return new InfinispanIdempotentRepository(container, "camel-default-cache");
    }
}
***Configuration***
The configuration of an infinispan cache container
Aliases
Default Cache: camel-default-cache
Module: org.jboss.as.clustering.infinispan
Statistics Enabled: false
***Async Operations***
Defines a thread pool used for asynchronous operations.
Keepalive Time: 60000MILLISECONDS
Max Threads: 25
Min Threads: 25
Queue Length: 1000
***Expiration***
Defines a thread pool used for evictions.
Keepalive Time: 60000MILLISECONDS
Max Threads: 1
***Listener***
Defines a thread pool used for asynchronous cache listener notifications.
Keepalive Time: 60000MILLISECONDS
Max Threads: 1
Min Threads: 1
Queue Length: 100000
***Persistence***
Defines a thread pool used for interacting with the persistent store.
Keepalive Time: 60000MILLISECONDS
Max Threads: 4
***Remote Command***
Defines a thread pool used to execute remote commands.
Keepalive Time: 60000MILLISECONDS
Max Threads: 200
Min Threads: 1
Queue Length: 0
***State Transfer***
Defines a thread pool used for state transfer.
Keepalive Time: 60000MILLISECONDS
Max Threads: 60
Min Threads: 1
Queue Length: 0
***Transport***
Defines a thread pool used for asynchronous transport communication.
Keepalive Time: 60000MILLISECONDS
Max Threads: 25
Min Threads: 25
Queue Length: 100000
***JGroups***
The description of the transport used by this cache container
Channel: ee
Lock Timeout: 240000MILLISECONDS
More Settings in replicated cache:
***Locking***
Acquire Timeout:15000MILLISECONDS
Concurrency Level:1000
Isolation:READ_COMMITTED
Striping:false
***Partition Handling***
Enabled:false
***State Transfer***
Chunk Size:512
Timeout:240000MILLISECONDS
***Transaction***
Locking:OPTIMISTIC
Mode:FULL_XA
Stop Timeout:10000MILLISECONDS
***Expiration***
Interval:60000MILLISECONDS
Lifespan:-1MILLISECONDS
Max Idle:-1MILLISECONDS
***Attributes***
Module:
Remote Timeout:17500MILLISECONDS
Statistics Enabled:false
***Memory***
Size:-1

Why does Gatling stop the simulation when any scenario exits, instead of waiting until the end?

Let's say I have this configuration
val scn = (name: String) => scenario(name)
  .forever {
    exec(request)
  }
setUp(
scn("scn1").inject(atOnceUsers(1))
.throttle(
jumpToRps(1), holdFor(10 seconds)
),
scn("scn2").inject(atOnceUsers(1))
.throttle(jumpToRps(1), holdFor(20 seconds))
).protocols(http.baseURLs(url))
I would expect to run the whole simulation for 20 seconds - until all is finished. What actually happens is that the simulation is stopped after 10 seconds, right after the first scenario finishes.
---- Requests ------------------------------------------------------------------
> Global (OK=20 KO=0 )
> scn1 / HTTP Request (OK=10 KO=0 )
> scn2 / HTTP Request (OK=10 KO=0 )
---- scn1 ----------------------------------------------------------------------
[--------------------------------------------------------------------------] 0%
waiting: 0 / active: 1 / done:0
---- scn2 ----------------------------------------------------------------------
[--------------------------------------------------------------------------] 0%
waiting: 0 / active: 1 / done:0
================================================================================
Simulation foo.Bar completed in 10 seconds
To overcome this in general, I need to configure every scenario that ends earlier than the final one to wait with zero throttle.
setUp(
scn.inject(atOnceUsers(1))
.throttle(
jumpToRps(1), holdFor(10 seconds),
jumpToRps(0), holdFor(10 seconds) // <-- added wait
),
scn.inject(atOnceUsers(1))
.throttle(jumpToRps(1), holdFor(20 seconds))
).protocols(http.baseURLs(url))
Is this expected behavior? What other options do I have to make my simulation run until all scenarios are finished or until maxDuration?
A possible explanation could be that the feeder loops over its data and exits when there is no more data. In this case, call "circular" on your feeder so that it goes back to the top of the sequence once the end is reached.

Use of pace in Gatling to control rate

I have the following scenario, which has two requests (RequestOne and RequestTwo). It is set up to run with 3 users and 1 repetition. The simulation should take at least 20 seconds to finish, as I am using 20 seconds as the pace. However, every time I run it, it finishes in less than 20 seconds. I tried different values for the pace as well.
val Workload = scenario("Load Test")
  .repeat(1, "repetition") {
    pace(20 seconds)
      .exitBlockOnFail {
        feed(requestIdFeeder)
          .group("Load Test") {
            exec(session => {
              session.set("url", spURL)
            })
              .group("RequestOne") { exec(requestOne) }
              .feed(requestIdFeeder)
              .group("RequestTwo") { exec(requestTwo) }
          }
      }
  }
setUp(Workload.inject(atOnceUsers(3))).protocols(httpProtocol)
output
Simulation com.performance.LoadTest completed in 11 seconds
Found the problem. I used only 1 repetition, so the scenario didn't need to wait for the 20-second pace to complete and exited early. Setting the repetition count to > 1 helped achieve the desired rate.
val Workload = scenario("Load Test")
  .repeat(10, "repetition") {
    pace(20 seconds)
      .exitBlockOnFail {
So, if you want to achieve a fixed number of transactions in your simulation, use repetition; otherwise use forever, as mentioned in the Gatling documentation, to achieve a consistent rate.
val Workload = scenario("Load Test")
  .forever(
    pace(20 seconds)
      .exitBlockOnFail {
