I've made a fairly simple implementation with Akka.NET Streams using Sink.ActorRefWithAck: a subscriber asks a publisher for a large string, which the publisher sends back in slices.
It works perfectly fine locally (in unit tests) but not remotely, and I cannot understand what's wrong. Concretely: the subscriber is able to send the request to the publisher, which responds with an OnInit message, but the OnInit.Ack never makes it back to the publisher. The Ack message ends up as a dead letter:
INFO Akka.Actor.EmptyLocalActorRef - Message Ack from akka.tcp://OutOfProcessTaskProcessing#localhost:12100/user/Streamer_636568240846733287 to akka://OutOfProcessTaskProcessing/user/StreamSupervisor-0/StageActorRef-0 was not delivered. 1 dead letters encountered.
Note that the log comes from the destination actor, so the message reaches the right process; there is no obvious path error.
Looking at the publisher code, which never sees this message, I really don't know what I'm doing wrong:
public static void ReplyWithStreamedString(IUntypedActorContext context, string toStream, int chunkSize = 2000)
{
Source<string, NotUsed> source = Source.From(toStream.SplitBy(chunkSize));
source.To(Sink.ActorRefWithAck<string>(context.Sender, new StreamMessage.OnInit(),
new StreamMessage.OnInit.Ack(),
new StreamMessage.Completed(),
exception => new StreamMessage.Failure(exception.Message)))
.Run(context.System.Materializer());
}
Here is the subscriber code:
public static Task<string> AskStreamedString(this ICanTell self, object message, ActorSystem context, TimeSpan? timeout = null)
{
var tcs = new TaskCompletionSource<string>();
if (timeout.HasValue)
{
CancellationTokenSource ct = new CancellationTokenSource(timeout.Value);
ct.Token.Register(() => tcs.TrySetCanceled());
}
var props = Props.Create(() => new StreamerActorRef(tcs));
var tempActor = context.ActorOf(props, $"Streamer_{DateTime.Now.Ticks}");
self.Tell(message, tempActor);
return tcs.Task.ContinueWith(task =>
{
context.Stop(tempActor);
if(task.IsCanceled)
throw new OperationCanceledException();
if (task.IsFaulted)
throw task.Exception.GetBaseException();
return task.Result;
});
}
internal class StreamerActorRef : ReceiveActor
{
readonly TaskCompletionSource<string> _tcs;
private readonly StringBuilder _stringBuilder = new StringBuilder();
public StreamerActorRef(TaskCompletionSource<string> tcs)
{
_tcs = tcs;
Ready();
}
private void Ready()
{
ReceiveAny(message =>
{
switch (message)
{
case StreamMessage.OnInit _:
Sender.Tell(new StreamMessage.OnInit.Ack());
break;
case StreamMessage.Completed _:
string result = _stringBuilder.ToString();
_tcs.TrySetResult(result);
break;
case string slice:
_stringBuilder.Append(slice);
Sender.Tell(new StreamMessage.OnInit.Ack());
break;
case StreamMessage.Failure error:
_tcs.TrySetException(new InvalidOperationException(error.Reason));
break;
}
});
}
}
With messages:
public class StreamMessage
{
public class OnInit
{
public class Ack{}
}
public class Completed { }
public class Failure
{
public string Reason { get; }
public Failure(string reason)
{
Reason = reason;
}
}
}
In general, sources and sinks that work with actor refs were not designed to work over remote connections: they don't cover message retries, which can cause deadlocks in your system if a stream control message is not delivered.
The feature you're looking for is called StreamRefs (which work like actor refs, but for streams), and it will be shipped as part of the v1.4 release (see the GitHub pull request for more details).
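For illustration only, here is a rough sketch of what the publisher side might look like once StreamRefs are available. It assumes the Akka.NET 1.4 StreamRefs API (StreamRefs.SourceRef, ISourceRef<T>, PipeTo, in the Akka.Streams, Akka.Streams.Dsl and Akka.Actor namespaces); treat the exact names and materialized types as assumptions to verify, not a drop-in replacement:
public static void ReplyWithStreamRef(IUntypedActorContext context, string toStream, int chunkSize = 2000)
{
    Source<string, NotUsed> source = Source.From(toStream.SplitBy(chunkSize));

    // Materialize a SourceRef instead of pushing into a remote Sink.ActorRefWithAck;
    // the ref is serializable and carries its own ack/back-pressure protocol over the wire.
    Task<ISourceRef<string>> sourceRefTask =
        source.RunWith(StreamRefs.SourceRef<string>(), context.System.Materializer());

    // Hand the materialized ref back to the requester, which can then run
    // sourceRef.Source locally into whatever sink it likes.
    sourceRefTask.PipeTo(context.Sender);
}
On the subscriber side, the received ISourceRef<string> can then be run locally (for example aggregated into a StringBuilder), so the manual OnInit/Ack protocol and the temporary StreamerActorRef should no longer be needed.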
I'm fairly new to Flink and would be grateful for any advice with this issue.
I wrote a job that receives some input events and compares them with some rules before forwarding them on to kafka topics based on whatever rules match. I implemented this using a flatMap and found it worked well, with one downside: I was loading the rules just once, during application startup, by calling an API from my main() method, and passing the result of this API call into the flatMap function. This worked, but it means that if there are any changes to the rules I have to restart the application, so I wanted to improve it.
I found this page in the documentation which seems to be an appropriate solution to the problem. I wrote a custom source to poll my Rules API every few minutes, and then used a BroadcastProcessFunction, with the Rules added to the broadcast state in processBroadcastElement and the events processed by processElement.
The solution is working, but with one problem. My first approach using a flatMap would process the events almost instantly. Now that I have changed to a BroadcastProcessFunction, each event takes 60 seconds to process, and it seems to be more or less exactly 60 seconds every time with almost no variation. I made no changes to the rule matching logic itself.
I've had a look through the documentation and I can't seem to find a reason for this, so I'd appreciate it if anyone more experienced in Flink could offer a suggestion as to what might cause this delay.
The job:
public static void main(String[] args) throws Exception {
// set up the streaming execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
// read the input from Kafka
DataStream<KafkaEvent> documentStream = env.addSource(
createKafkaSource(getSourceTopic(), getSourceProperties())).name("Kafka[" + getSourceTopic() + "]");
// Configure the Rules data stream
DataStream<RulesEvent> ruleStream = env.addSource(
new RulesApiHttpSource(
getApiRulesSubdomain(),
getApiBearerToken(),
DataType.DataTypeName.LOGS,
getRulesApiCacheDuration()) // Currently set to 120000
);
MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
// broadcast the rules and create the broadcast state
BroadcastStream<RulesEvent> ruleBroadcastStream = ruleStream
.broadcast(ruleStateDescriptor);
// extract the resources and attributes
documentStream
.connect(ruleBroadcastStream)
.process(new FanOutLogsRuleMapper()).name("FanOut Stream")
.addSink(createKafkaSink(getDestinationProperties()))
.name("FanOut Sink");
// run the job
env.execute(FanOutJob.class.getName());
}
The custom HTTP source which gets the rules:
public class RulesApiHttpSource extends RichSourceFunction<RulesEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(RulesApiHttpSource.class);
private final long pollIntervalMillis;
private final String endpoint;
private final String bearerToken;
private final DataType.DataTypeName dataType;
private final RulesApiCaller caller;
private volatile boolean running = true;
public RulesApiHttpSource(String endpoint, String bearerToken, DataType.DataTypeName dataType, long pollIntervalMillis) {
this.pollIntervalMillis = pollIntervalMillis;
this.endpoint = endpoint;
this.bearerToken = bearerToken;
this.dataType = dataType;
this.caller = new RulesApiCaller(this.endpoint, this.bearerToken);
}
@Override
public void open(Configuration configuration) throws Exception {
// do nothing
}
@Override
public void close() throws IOException {
// do nothing
}
@Override
public void run(SourceContext<RulesEvent> ctx) throws IOException {
while (running) {
if (pollIntervalMillis > 0) {
try {
RulesEvent event = new RulesEvent();
event.setRules(getCurrentRulesList());
event.setDataType(this.dataType);
event.setRetrievedAt(Instant.now());
ctx.collect(event);
Thread.sleep(pollIntervalMillis);
} catch (InterruptedException e) {
running = false;
}
} else if (pollIntervalMillis <= 0) {
cancel();
}
}
}
public List<Rule> getCurrentRulesList() throws IOException {
// call API and get rules
}
@Override
public void cancel() {
running = false;
}
}
The BroadcastProcessFunction:
public abstract class FanOutRuleMapper extends BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent> {
protected final String RULES_EVENT_NAME = "rulesEvent";
protected final MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
@Override
public void processBroadcastElement(RulesEvent rulesEvent, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.Context ctx, Collector<KafkaEvent> out) throws Exception {
ctx.getBroadcastState(ruleStateDescriptor).put(RULES_EVENT_NAME, rulesEvent);
LOGGER.debug("Added to broadcast state {}", rulesEvent.toString());
}
// omitted rules matching logic
}
public class FanOutLogsRuleMapper extends FanOutRuleMapper {
public FanOutLogsRuleMapper() {
super();
}
@Override
public void processElement(KafkaEvent in, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.ReadOnlyContext ctx, Collector<KafkaEvent> out) throws Exception {
RulesEvent rulesEvent = ctx.getBroadcastState(ruleStateDescriptor).get(RULES_EVENT_NAME);
ExportLogsServiceRequest otlpLog = extractOtlpMessageFromJsonPayload(in);
for (Rule rule : rulesEvent.getRules()) {
boolean match = false;
// omitted rules matching logic
if (match) {
for (RuleDestination ruleDestination : rule.getRulesDestinations()) {
out.collect(fillInTheEvent(in, rule, ruleDestination, otlpLog));
}
}
}
}
}
Maybe you can give the complete code of the FanOutLogsRuleMapper class; currently the match variable is always false.
Can someone please help me understand when and how a (session) window in Flink is created? Or how the samples are processed?
For instance, say I have a continuous stream of events flowing in, the events being requests coming into an application and the responses provided by that application.
As part of the Flink processing, we need to understand how much time is taken to serve a request.
I understand that there are tumbling time windows which get triggered every n seconds (as configured), and as soon as that time lapses, all the events in that time window are aggregated.
So, for example:
Let's assume that the defined time window is 30 seconds; if an event arrives at time t and another arrives at t+30, then both will be processed, but an event arriving at t+31 will be ignored.
Please correct me if the above statement is not right.
A question on the above: if, say, an event arrives at time t and another event arrives at t+3, will it still wait the entire 30 seconds to aggregate and finalize the results?
Now, in the case of a session window, how does this work? If the events are processed individually and the broker timestamp is used as the session_id for each individual event at deserialization time, will a session window be created for each event? If yes, do we need to treat request and response events differently? Because if we don't, won't the response event get its own session window?
I will try posting the example (in Java) that I am playing with shortly, but any input on the above points will be helpful!
DTOs:
public class IncomingEvent{
private String id;
private String eventId;
private Date timestamp;
private String component;
//getters and setters
}
public class FinalOutPutEvent{
private String id;
private long timeTaken;
//getters and setters
}
===============================================
Deserialization of incoming events:
public class IncomingEventDeserializationScheme implements KafkaDeserializationSchema<IncomingEvent> {
private ObjectMapper mapper;
public IncomingEventDeserializationScheme(ObjectMapper mapper) {
this.mapper = mapper;
}
@Override
public TypeInformation<IncomingEvent> getProducedType() {
return TypeInformation.of(IncomingEvent.class);
}
@Override
public boolean isEndOfStream(IncomingEvent nextElement) {
return false;
}
@Override
public IncomingEvent deserialize(ConsumerRecord<byte[], byte[]> record) throws Exception {
if (record.value() == null) {
return null;
}
try {
IncomingEvent event = mapper.readValue(record.value(), IncomingEvent.class);
if(event != null) {
new SessionWindow(record.timestamp());
event.setOffset(record.offset());
event.setTopic(record.topic());
event.setPartition(record.partition());
event.setBrokerTimestamp(record.timestamp());
}
return event;
} catch (Exception e) {
return null;
}
}
}
===============================================
Main logic:
public class MyEventJob {
private static final ObjectMapper mapper = new ObjectMapper();
public static void main(String[] args) throws Exception {
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
MyEventJob eventJob = new MyEventJob();
InputStream inStream = eventJob.getFileFromResources("myConfig.properties");
ParameterTool parameter = ParameterTool.fromPropertiesFile(inStream);
Properties properties = parameter.getProperties();
Integer timePeriodBetweenEvents = 120;
String outWardTopicHostedOnServer = "localhost:9092";
DataStreamSource<IncomingEvent> stream = env.addSource(new FlinkKafkaConsumer<>("my-input-topic", new IncomingEventDeserializationScheme(mapper), properties));
SingleOutputStreamOperator<IncomingEvent> filteredStream = stream
.assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<IncomingEvent>() {
long eventTime;
@Override
public long extractTimestamp(IncomingEvent element, long previousElementTimestamp) {
return element.getTimestamp();
}
@Override
public Watermark getCurrentWatermark() {
return new Watermark(eventTime);
}
})
.map(e -> { e.setId(e.getEventId()); return e; });
SingleOutputStreamOperator<FinalOutPutEvent> correlatedStream = filteredStream
.keyBy(new KeySelector<IncomingEvent, String> (){
@Override
public String getKey(@Nonnull IncomingEvent input) throws Exception {
return input.getId();
}
})
.window(GlobalWindows.create()).allowedLateness(Time.seconds(defaultSliceTimePeriod))
.trigger( new Trigger<IncomingEvent, Window> (){
private final long sessionTimeOut;
public SessionTrigger(long sessionTimeOut) {
this.sessionTimeOut = sessionTimeOut;
}
@Override
public TriggerResult onElement(IncomingEvent element, long timestamp, Window window, TriggerContext ctx)
throws Exception {
ctx.registerProcessingTimeTimer(timestamp + sessionTimeOut);
return TriggerResult.CONTINUE;
}
@Override
public TriggerResult onProcessingTime(long time, Window window, TriggerContext ctx) throws Exception {
return TriggerResult.FIRE_AND_PURGE;
}
@Override
public TriggerResult onEventTime(long time, Window window, TriggerContext ctx) throws Exception {
return TriggerResult.CONTINUE;
}
@Override
public void clear(Window window, TriggerContext ctx) throws Exception {
//check the clear method implementation
}
})
.process(new ProcessWindowFunction<IncomingEvent, FinalOutPutEvent, String, SessionWindow>() {
@Override
public void process(String arg0,
ProcessWindowFunction<IncomingEvent, FinalOutPutEvent, String, SessionWindow>.Context arg1,
Iterable<IncomingEvent> input, Collector<FinalOutPutEvent> out) throws Exception {
List<IncomingEvent> eventsIn = new ArrayList<>();
input.forEach(eventsIn::add);
if(eventsIn.size() == 1) {
//Logic to handle incomplete request/response events
} else if (eventsIn.size() == 2) {
//Logic to handle the complete request/response and how much time it took
}
}
} );
FlinkKafkaProducer<FinalOutPutEvent> kafkaProducer = new FlinkKafkaProducer<>(
outWardTopicHostedOnServer, // broker list
"target-topic", // target topic
new EventSerializationScheme(mapper));
correlatedStream.addSink(kafkaProducer);
env.execute("Streaming");
}
}
Thanks
Vicky
From your description, I think you want to write a custom ProcessFunction keyed by the session_id. You'll have a ValueState where you store the timestamp of the request event; when you get the corresponding response event, you calculate the delta, emit it (with the session_id), and clear the state.
It's likely you'd also want to set a timer when you get the request event, so that if you don't get a response event within a safe/long amount of time, you can emit a side output of failed requests.
So, with the default trigger, each window is finalized after its time fully passes. Depending on whether you are using event time or processing time this may mean different things, but in general Flink will always wait for the window to be closed before it is fully processed. The event at t+31 in your case would simply go to the next window.
As for session windows, they are windows too, meaning that in the end they simply aggregate samples whose timestamps differ by less than the defined gap. Internally this is more complicated than normal windows, since session windows don't have defined starts and ends. The session window operator takes a sample and creates a new window for that individual sample. The operator then checks whether the newly created window can be merged with already existing ones (i.e. whether their timestamps are closer than the gap) and merges them. This finally results in a window that contains all elements whose timestamps are closer to each other than the defined gap.
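For reference, here is a minimal sketch of declaring such a session window with Flink's built-in assigner, keyed by the question's IncomingEvent.getId() and using an assumed 30-second gap, instead of GlobalWindows plus a hand-rolled trigger:
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// timestamps and watermarks are assumed to be assigned upstream, as in the question
DataStream<IncomingEvent> events = ...;

events
    .keyBy(IncomingEvent::getId)
    // a window is created per element and merged with neighbours while gaps stay under 30 seconds
    .window(EventTimeSessionWindows.withGap(Time.seconds(30)))
    .process(new ProcessWindowFunction<IncomingEvent, FinalOutPutEvent, String, TimeWindow>() {
        @Override
        public void process(String key, Context ctx,
                            Iterable<IncomingEvent> input, Collector<FinalOutPutEvent> out) {
            // input holds every event of the merged session for this key,
            // e.g. a request and its matching response
        }
    });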
You are making this more complicated than it needs to be. The example below will need some adjustment, but will hopefully convey the idea of how to use a KeyedProcessFunction rather than session windows.
Also, the constructor for BoundedOutOfOrdernessTimestampExtractor expects to be passed a Time maxOutOfOrderness. Not sure why you are overriding its getCurrentWatermark method with an implementation that ignores the maxOutOfOrderness.
public static void main(String[] args) throws Exception {
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Event> events = ...
events
.assignTimestampsAndWatermarks(new TimestampsAndWatermarks(OUT_OF_ORDERNESS))
.keyBy(e -> e.sessionId)
.process(new RequestReponse())
...
}
public static class RequestReponse extends KeyedProcessFunction<KEY, Event, Long> {
private ValueState<Long> requestTimeState;
@Override
public void open(Configuration config) {
ValueStateDescriptor<Long> descriptor = new ValueStateDescriptor<>(
"request time", Long.class);
requestTimeState = getRuntimeContext().getState(descriptor);
}
@Override
public void processElement(Event event, Context context, Collector<Long> out) throws Exception {
TimerService timerService = context.timerService();
Long requestedAt = requestTimeState.value();
if (requestedAt == null) {
// haven't seen the request before; save its timestamp
requestTimeState.update(event.timestamp);
timerService.registerEventTimeTimer(event.timestamp + TIMEOUT);
} else {
// this event is the response
// emit the time elapsed between request and response
out.collect(event.timestamp - requestedAt);
}
}
@Override
public void onTimer(long timestamp, OnTimerContext context, Collector<Long> out) throws Exception {
//handle incomplete request/response events
}
}
public static class TimestampsAndWatermarks extends BoundedOutOfOrdernessTimestampExtractor<Event> {
public TimestampsAndWatermarks(Time t) {
super(t);
}
@Override
public long extractTimestamp(Event event) {
return event.eventTime;
}
}
I'm trying to develop a socket-based application with WP7 (client) and WPF (server), and I have an issue that I don't understand.
I've written a "Server" class which should handle connecting with the client and receiving strings.
The problem is that the server receives only the first string sent by the client, and then the connection breaks; I have to reset my client app (only the client). I'm assuming it's a server-side problem because I'm rewriting the server application using async calls; before that, the client worked well. My server-side code:
public class StateObject
{
public byte[] Buffer { get; set; }
public Socket WorkSocket { get; set; }
}
public class MessageRecievedEventArgs : EventArgs
{
public string Message { get; set; }
}
public class Server
{
ManualResetEvent _done;
TcpListener _listener;
public event EventHandler<MessageRecievedEventArgs> OnMessageRecieved;
public Server()
{
_done = new ManualResetEvent(false);
_listener = new TcpListener(IPAddress.Any, 4124);
}
public void Start()
{
Thread th = new Thread(StartListening);
th.IsBackground = true;
th.Start();
}
private void StartListening()
{
_listener.Start();
while (true)
{
_done.Reset();
_listener.BeginAcceptTcpClient(new AsyncCallback(OnConnected), _listener);
_done.WaitOne();
}
}
private void OnConnected(IAsyncResult result)
{
TcpListener listener = result.AsyncState as TcpListener;
Socket socket = listener.EndAcceptSocket(result);
byte[] buffer = new byte[256];
StateObject state = new StateObject { Buffer = buffer, WorkSocket = socket };
socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, new AsyncCallback(OnRead), state);
}
private void OnRead(IAsyncResult result)
{
var state = (StateObject)result.AsyncState;
int buffNum = state.WorkSocket.EndReceive(result);
string message = Encoding.UTF8.GetString(state.Buffer, 0, buffNum);
if (OnMessageRecieved != null)
{
MessageRecievedEventArgs args = new MessageRecievedEventArgs();
args.Message = message;
OnMessageRecieved(this, args);
}
_done.Set();
}
}
Client:
protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
try
{
base.OnNavigatedTo(e);
_socketEventArgs = new SocketAsyncEventArgs() { RemoteEndPoint = App.Connection.RemoteEndPoint };
Send("{ECHO}");
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
private void Send(string key)
{
var bytes = Encoding.UTF8.GetBytes(key + "$");
_socketEventArgs.SetBuffer(bytes, 0, bytes.Count());
if (Socket.Connected)
Socket.SendAsync(_socketEventArgs);
else
MessageBox.Show("Application is not connected. Please reset connection (press 'back' key and 'connect' button). It may be needed to restart server application");
}
The "{ECHO}" message is sent by the client and received by the server; each subsequent message is sent but never received. I assume I don't understand the async socket call mechanism... can someone enlighten me? :)
It seems like you are only reading once. You probably want to call receive repeatedly to drain the entire stream.
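For illustration, here is a sketch of how the Server.OnRead callback from the question could re-arm the receive so the server keeps reading on the same connection; it reuses the question's StateObject and event names and is meant as a sketch, not a drop-in fix:
private void OnRead(IAsyncResult result)
{
    var state = (StateObject)result.AsyncState;
    int bytesRead = state.WorkSocket.EndReceive(result);

    if (bytesRead > 0)
    {
        string message = Encoding.UTF8.GetString(state.Buffer, 0, bytesRead);
        if (OnMessageRecieved != null)
            OnMessageRecieved(this, new MessageRecievedEventArgs { Message = message });

        // queue up the next read on the same socket so later messages are received too
        state.WorkSocket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
            SocketFlags.None, new AsyncCallback(OnRead), state);
    }
    else
    {
        // 0 bytes means the remote side closed the connection
        state.WorkSocket.Close();
    }

    // unchanged from the question; arguably this belongs in OnConnected so the
    // listener can accept the next client as soon as a connection is established
    _done.Set();
}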
I want to define a SuggestBox which behaves like the search bar in Google Maps: when you begin to type, real addresses starting with the typed letters appear.
I think that I need to use the Geocoder.getLocations(String address, LocationCallback callback) method, but I have no idea how to connect this with the oracle that the suggest box needs to produce its suggestions.
Can you please give me ideas on how to connect the getLocations method with the SuggestOracle?
I solved this by implementing a subclass of SuggestBox which has its own SuggestOracle. The AddressOracle acts as a wrapper for the Google Maps service, for which the Geocoder class in the Google Maps API for GWT offers abstractions.
So here is my solution:
First, we implement the widget for a SuggestBox with Google Maps suggestions:
public class GoogleMapsSuggestBox extends SuggestBox {
public GoogleMapsSuggestBox() {
super(new AddressOracle());
}
}
Then we implement the SuggestOracle, which wraps the Geocoder async method abstractions:
class AddressOracle extends SuggestOracle {
// this instance is needed, to call the getLocations-Service
private final Geocoder geocoder;
public AddressOracle() {
geocoder = new Geocoder();
}
@Override
public void requestSuggestions(final Request request,
final Callback callback) {
// this is the string, the user has typed so far
String addressQuery = request.getQuery();
// look up suggestions only if more than 2 letters have been typed
if (addressQuery.length() > 2) {
geocoder.getLocations(addressQuery, new LocationCallback() {
@Override
public void onFailure(int statusCode) {
// do nothing
}
@Override
public void onSuccess(JsArray<Placemark> places) {
// create an oracle response from the places, found by the
// getLocations-Service
Collection<Suggestion> result = new LinkedList<Suggestion>();
for (int i = 0; i < places.length(); i++) {
String address = places.get(i).getAddress();
AddressSuggestion newSuggestion = new AddressSuggestion(
address);
result.add(newSuggestion);
}
Response response = new Response(result);
callback.onSuggestionsReady(request, response);
}
});
} else {
Response response = new Response(
Collections.<Suggestion> emptyList());
callback.onSuggestionsReady(request, response);
}
}
}
And this is a special class for the oracle suggestions, which just represents a String with the delivered address.
class AddressSuggestion implements SuggestOracle.Suggestion, Serializable {
private static final long serialVersionUID = 1L;
String address;
public AddressSuggestion(String address) {
this.address = address;
}
@Override
public String getDisplayString() {
return this.address;
}
@Override
public String getReplacementString() {
return this.address;
}
}
Now you can add the new widget to your web page by writing the following line in the onModuleLoad() method of your EntryPoint class:
RootPanel.get("hm-map").add(new GoogleMapsSuggestBox());
I have the following bit of code to set up my Rx hookups:
Event related definitions:
public class QueryEventArgs : EventArgs
{
public SomeParametersType SomeParameters
{
get;
set;
}
public object QueryContext
{
get;
set;
}
};
public delegate void QueryDelegate(object sender, QueryEventArgs e);
public event QueryDelegate QueryEvent;
Initialization:
queryObservable = Observable.FromEvent<QueryEventArgs>(this, "QueryEvent");
queryObservable.Subscribe((e) =>
{
tbQueryProgress.Text = "Querying... ";
client.QueryAsync(e.EventArgs.SomeParameters, e.EventArgs.QueryContext);
});
queryCompletedObservable = from e in Observable.FromEvent<QueryCompletedEventArgs>(client, "QueryCompleted").TakeUntil(queryObservable) select e;
queryCompletedObservable.Subscribe((e) =>
{
tbQueryProgress.Text = "Ready";
SilverlightClientService_QueryCompleted(e.Sender, e.EventArgs);
},
(Exception ex) =>
{
SetError("Query error: " + ex);
}
);
"client" is the WCF client and everything else is fairly self-explanatory.
The "TakeUntil" is there to stop the user from stomping on himself by starting a new query while a current one is still running. However, while the code works if I remove the "TakeUntil" clause, with it in place the query never completes.
Is this the correct pattern? If so, am I doing something wrong?
Cheers,
-Tim
TakeUntil terminates the subscription when a value is received from its argument, so your first queryObservable starts up the query but also terminates the subscription to the complete events.
The simpler solution is to set up an IObservable around your actual query, and then use Switch to ensure that only one query runs at a time.
public static class ClientExtensions
{
public static IObservable<QueryCompletedEventArgs> QueryObservable(
this QueryClient client,
object[] someParameters, object queryContext)
{
return Observable.CreateWithDisposable<QueryCompletedEventArgs>(observer =>
{
var subscription = Observable.FromEvent<QueryCompletedEventArgs>(
h => client.QueryCompleted += h,
h => client.QueryCompleted -= h
)
.Subscribe(observer);
client.QueryAsync(someParameters, queryContext);
return new CompositeDisposable(
subscription,
Disposable.Create(() => client.Abort())
);
});
}
}
Then you can do this:
queryObservable = Observable.FromEvent<QueryEventArgs>(this, "QueryEvent");
queryObservable
.Select(query => client.QueryObservable(
query.EventArgs.SomeParameters,
query.EventArgs.QueryContext
))
.Switch()
.Subscribe(queryComplete =>
{
tbQueryProgress.Text = "Ready";
// ... etc
});
This sets up one continuous flow, whereby each "Query" event starts a query which emits the completed event from that query. New queries automatically terminate the previous query (if possible) and start a new one.