Write a pattern which groups events using flink-cep - apache-flink

We are using flink-cep as a standalone library for finding out patterns in a list of events.
Given the following list of events:
val patientKey = "patient"
val hrKey = "hr"
// Event
val p1e1 = Event("hr", mapOf(patientKey to 1, hrKey to 1))
val p1e2 = Event("hr", mapOf(patientKey to 1, hrKey to 2))
val p2e1 = Event("hr", mapOf(patientKey to 2, hrKey to 1))
val p1e3 = Event("hr", mapOf(patientKey to 1, hrKey to 3))
val p2e2 = Event("hr", mapOf(patientKey to 2, hrKey to 2))
val p3e1 = Event("hr", mapOf(patientKey to 3, hrKey to 1))
val p2e3 = Event("hr", mapOf(patientKey to 2, hrKey to 3))
val p3e2 = Event("hr", mapOf(patientKey to 3, hrKey to 2))
val p3e3 = Event("hr", mapOf(patientKey to 3, hrKey to 3))
We would like to write a pattern which returns as matches:
first match: p1e1, p1e2, p1e3
second match: p2e1, p2e2, p2e3
third match: p3e1, p3e2, p3e3
As such, this seems to be doable by running CEP in a Flink environment with keyed streams, but how do we do it without keyed streams? We cannot deploy a full Flink environment because we are running on a constrained device.
We would like to get all the heart rates gathered for a patient within 5 seconds.
Thanks

You can get the effect of keying the stream by instead putting the keyed constraint into the pattern definition. Something like this, if you use SQL:
PATTERN (A B C) WITHIN INTERVAL '5' SECOND
DEFINE
A AS A.hr = 1
B AS B.patientKey = A.patientKey AND B.hr = 2
C AS C.patientKey = B.patientKey AND C.hr = 3
If you don't use SQL, the same logic applies. (For the contiguity, you'll need to specify followedBy rather than next, since you won't have partitioned the stream by patientKey.)
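With the Pattern API, the same pattern could be expressed roughly as below. This is a minimal sketch in Scala against the Java CEP classes; the Event representation (a fields map holding the patient and hr entries) is an assumption made for illustration, so adapt the accessors to your own type.

// Sketch only: Event is assumed to look like
//   case class Event(name: String, fields: Map[String, Int])
import org.apache.flink.cep.pattern.Pattern
import org.apache.flink.cep.pattern.conditions.{IterativeCondition, SimpleCondition}
import org.apache.flink.streaming.api.windowing.time.Time

val pattern = Pattern
  .begin[Event]("a")
  .where(new SimpleCondition[Event] {
    override def filter(e: Event): Boolean = e.fields("hr") == 1
  })
  .followedBy("b")
  .where(new IterativeCondition[Event] {
    override def filter(e: Event, ctx: IterativeCondition.Context[Event]): Boolean = {
      // same patient as the event already matched as "a"
      val a = ctx.getEventsForPattern("a").iterator().next()
      e.fields("hr") == 2 && e.fields("patient") == a.fields("patient")
    }
  })
  .followedBy("c")
  .where(new IterativeCondition[Event] {
    override def filter(e: Event, ctx: IterativeCondition.Context[Event]): Boolean = {
      val b = ctx.getEventsForPattern("b").iterator().next()
      e.fields("hr") == 3 && e.fields("patient") == b.fields("patient")
    }
  })
  .within(Time.seconds(5))

The key point is that the patient constraint lives inside the conditions (via getEventsForPattern) rather than in a keyBy, and within(Time.seconds(5)) covers the 5-second requirement.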
For what it's worth, I can't think of any operational or performance benefit that will come from avoiding keyed streams. (In fact, CEP always uses keyed state, even if you don't explicitly use keyed streams.) The use of keyed streams makes it possible to use a larger Flink cluster and operate in parallel, but doesn't require it.

Related

How to create my own layers on MONAI U-Net?

I'm using MONAI in Spyder (Anaconda) to build a U-Net network. I want to add/modify layers starting from this baseline.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nets.UNet(
    spatial_dims = 2,
    in_channels = 3,
    out_channels = 1,
    channels = (4, 8, 16, 32, 64),
    strides = (2, 2, 2, 2),
    num_res_units = 3,
    norm = layers.Norm.BATCH,
    kernel_size = 3,
).to(device)
loss_function = losses.DiceLoss()
torch.backends.cudnn.benchmark = True
optimizer = torch.optim.Adam(model.parameters(), lr = 1e-4, weight_decay = 0)
post_pred = Compose([EnsureType(), Activations(sigmoid = True), AsDiscrete(threshold=0.5)])
post_label = Compose([EnsureType()])
inferer = SimpleInferer()
utils.set_determinism(seed=46)
My final aim is to create a MultiResUNet that has different layers such as:
class Conv2d_batchnorm(torch.nn.Module):
    '''
    2D Convolutional layers

    Arguments:
        num_in_filters {int} -- number of input filters
        num_out_filters {int} -- number of output filters
        kernel_size {tuple} -- size of the convolving kernel
        stride {tuple} -- stride of the convolution (default: {(1, 1)})
        activation {str} -- activation function (default: {'relu'})
    '''

    def __init__(self, num_in_filters, num_out_filters, kernel_size, stride=(1, 1), activation='relu'):
        super().__init__()
        self.activation = activation
        self.conv1 = torch.nn.Conv2d(in_channels=num_in_filters, out_channels=num_out_filters,
                                     kernel_size=kernel_size, stride=stride, padding='same')
        self.batchnorm = torch.nn.BatchNorm2d(num_out_filters)

    def forward(self, x):
        x = self.conv1(x)
        x = self.batchnorm(x)
        if self.activation == 'relu':
            return torch.nn.functional.relu(x)
        else:
            return x
This is just an example of a different Conv2d layer that I would use instead of the native one in the baseline.
I hope some of you can suggest how to proceed.
Thanks, Fede

Flink SlidingEventTimeWindows doesn't work as expected

I have a stream execution configured as
object FlinkSlidingEventTimeExample extends App {

  case class Trx(timestamp: Long, id: String, trx: String, count: Int)

  val env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI()

  val watermarkS1 = WatermarkStrategy
    .forBoundedOutOfOrderness[Trx](Duration.ofSeconds(15))
    .withTimestampAssigner(new SerializableTimestampAssigner[Trx] {
      override def extractTimestamp(element: Trx, recordTimestamp: Long): Long = element.timestamp
    })

  val s1 = env.socketTextStream("localhost", 9999)
    .flatMap(l => l.split(" "))
    .map(l => Trx(timestamp = l.split(",")(0).toLong, id = l.split(",")(1), trx = l.split(",")(2), count = 1))
    .assignTimestampsAndWatermarks(watermarkS1)
    .keyBy(l => l.id)
    .window(SlidingEventTimeWindows.of(Time.seconds(20), Time.seconds(5))) // Not working
    //.window(SlidingProcessingTimeWindows.of(Time.seconds(20), Time.seconds(5))) // Working
    .sum("count")
    .print

  env.execute("FlinkSlidingEventTimeExample")
}
I have already defined a watermark strategy, but I can't figure out why it is not producing anything. Does anyone have any ideas? My Flink version is 1.14.0.
My build.sbt is like below:
scalaVersion := "2.12.15"
libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" % "1.14.0"
libraryDependencies += "org.apache.flink" %% "flink-runtime-web" % "1.14.0"
libraryDependencies += "org.apache.flink" %% "flink-clients" % "1.14.0"
libraryDependencies += "org.apache.flink" % "flink-queryable-state-runtime" % "1.14.0"
I am entering input data into the socket (port 9999) like below:
1640375790000,1,trx1
1640375815000,1,trx2
1640375841000,1,trx3
1640375741000,1,trx4
I tried providing timestamps that differ by more than the window size, but it is still not producing any output.
Flink Web UI screenshots (web-ui, watermarks) omitted.
Earlier answer deleted; it was based on faulty assumptions about the setup.
When event time windows fail to produce results it's always something to do with watermarking.
The timestamps in your input correspond to
December 24, 2021 19:56:30
December 24, 2021 19:56:55
December 24, 2021 19:57:21
December 24, 2021 19:55:41
so there's more than enough data to trigger the closure of several sliding windows. For example, trx2 (19:56:55) should produce a watermark of 19:56:39.999 (its timestamp minus the 15-second out-of-orderness bound, minus 1 millisecond), which is large enough to close these windows that contain 19:56:30:
19:56:15 - 19:56:34.999
19:56:20 - 19:56:39.999
However, your execution graph looks something like this (job graph screenshot omitted):
Socket source -> (rebalance) -> flatMap / map / watermarks -> ...
The problem is the rebalance between the socket source and the task that follows (the one doing flatMap -> map -> watermarks). Each of your four events goes to a different parallel instance of the watermark generator, and some instances aren't receiving any events at all. That's why no watermarks are being generated.
What you want to do instead is to chain the input parsing and watermark generation to the source at the same parallelism, so that the source, parsing, and watermarking all run as a single chained task (second screenshot omitted).
This code will do that:
env
  .socketTextStream("localhost", 9999)
  .map(l => {
    val input = l.split(",")
    Trx(timestamp = input(0).toLong, id = input(1), trx = input(2), count = 1)
  })
  .setParallelism(1)
  .assignTimestampsAndWatermarks(watermarkS1)
  .setParallelism(1)
  .keyBy(l => l.id)
  .window(SlidingEventTimeWindows.of(Time.seconds(20), Time.seconds(5)))
  .sum("count")
  .print
In general it's not necessary to do watermarking at a parallelism of one, but it is necessary that every instance of the watermark generator either has enough events to work with, or is configured with withIdleness. (And if every instance is idle then you won't get any results either.)
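For reference, adding an idleness timeout to the strategy from the question might look like this. A minimal sketch assuming the same Trx case class; the one-minute timeout is an arbitrary choice:

import java.time.Duration
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}

val watermarkWithIdleness = WatermarkStrategy
  .forBoundedOutOfOrderness[Trx](Duration.ofSeconds(15))
  .withTimestampAssigner(new SerializableTimestampAssigner[Trx] {
    override def extractTimestamp(element: Trx, recordTimestamp: Long): Long = element.timestamp
  })
  // An instance that sees no events for one minute is marked idle and
  // no longer holds back the overall watermark.
  .withIdleness(Duration.ofMinutes(1))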

Tensorflow Probability Logistic Regression Example

I feel I must be missing something obvious in my struggle to get a positive control for logistic regression working in TensorFlow Probability.
I've modified the logistic regression example from here and created positive-control features and labels. I struggle to achieve accuracy above 60%, yet this is an easy problem for a 'vanilla' Keras model (100% accuracy). What am I missing? I have tried different layers, activations, etc. With this way of setting up the model, is posterior updating actually being performed? Do I need to specify an interceptor object? Many thanks.
### Added positive control
nSamples = 80
features1 = np.float32(np.hstack((np.reshape(np.ones(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features2 = np.float32(np.hstack((np.reshape(np.zeros(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features = np.vstack((features1, features2))
labels = np.concatenate((np.zeros(40), np.ones(40)))
featuresInt, labelsInt = build_input_pipeline(features, labels, 10)
###

# w_true, b_true, features, labels = toy_logistic_data(FLAGS.num_examples, 2)
# featuresInt, labelsInt = build_input_pipeline(features, labels, FLAGS.batch_size)

with tf.name_scope("logistic_regression", values=[featuresInt]):
    layer = tfp.layers.DenseFlipout(
        units=1,
        activation=None,
        kernel_posterior_fn=tfp.layers.default_mean_field_normal_fn(),
        bias_posterior_fn=tfp.layers.default_mean_field_normal_fn())
    logits = layer(featuresInt)
    labels_distribution = tfd.Bernoulli(logits=logits)

neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(labelsInt))
kl = sum(layer.losses)
elbo_loss = neg_log_likelihood + kl

predictions = tf.cast(logits > 0, dtype=tf.int32)
accuracy, accuracy_update_op = tf.metrics.accuracy(
    labels=labelsInt, predictions=predictions)

with tf.name_scope("train"):
    optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate)
    train_op = optimizer.minimize(elbo_loss)

init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())

with tf.Session() as sess:
    sess.run(init_op)
    # Fit the model to data.
    for step in range(FLAGS.max_steps):
        _ = sess.run([train_op, accuracy_update_op])
        if step % 100 == 0:
            loss_value, accuracy_value = sess.run([elbo_loss, accuracy])
            print("Step: {:>3d} Loss: {:.3f} Accuracy: {:.3f}".format(
                step, loss_value, accuracy_value))

### Check with basic Keras
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1)])
optimizer = tf.train.AdamOptimizer(5e-2)
kerasModel.compile(optimizer=optimizer, loss='binary_crossentropy',
                   metrics=['accuracy'])
kerasModel.fit(features, labels, epochs=50)  # 100% accuracy
Compared to the GitHub example, you forgot to divide by the number of examples when defining the KL divergence:
kl = sum(layer.losses) / FLAGS.num_examples
When I make this change in your code, I quickly get to an accuracy of 99.9% on your toy data.
Additionally, the output layer of your Keras model should actually use a sigmoid activation for this problem (binary classification):
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid')])
It's a toy problem, but you will notice that the model gets to 100% accuracy faster with a sigmoid activation.

Scala read only certain parts of file

I'm trying to read an input file in Scala whose structure I know; however, I only need every 9th entry. So far I have managed to read the whole thing using:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val fields = lines.map(line => line.split(","))
The issue is that this leaves me with a huge array (we're talking 20 GB of data). Not only have I been forced to write some very ugly code to convert between RDD[Array[String]] and Array[String], but it has essentially made my code useless.
I've tried different approaches and combinations of .map(), .flatMap(), and .reduceByKey(), but nothing actually puts my collected "cells" into the format I need.
Here's what is supposed to happen: Reading a folder of text files from our server, the code should read each "line" of text in the format:
*---------*
| NASDAQ: |
*---------*
exchange, stock_symbol, date, stock_price_open, stock_price_high, stock_price_low, stock_price_close, stock_volume, stock_price_adj_close
and only keep hold of the stock_symbol, as that is the identifier I'm counting. So far my attempts have been to turn the entire thing into an array and collect only every 9th index from it into a collected_cells var. The issue is that, based on my calculations and real-life results, that code would take 335 days to run (no joke).
Here's my current code for reference:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SparkNum {

  def main(args: Array[String]) {
    // Do some Scala voodoo
    val sc = new SparkContext(new SparkConf().setAppName("Spark Numerical"))

    // Set input file as per HDFS structure + input args
    val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
    val fields = lines.map(line => line.split(","))

    var collected_cells: Array[String] = new Array[String](0)
    //println("[MESSAGE] Length of CC: " + collected_cells.length)

    val divider: Long = 9
    val array_length = fields.count / divider
    val casted_length = array_length.toInt

    val indexedFields = fields.zipWithIndex
    val indexKey = indexedFields.map { case (k, v) => (v, k) }

    println("[MESSAGE] Number of lines: " + array_length)
    println("[MESSAGE] Casted lenght of: " + casted_length)

    for (i <- 1 to casted_length) {
      println("[URGENT DEBUG] Processin line " + i + " of " + casted_length)

      var index = 9 * i - 8
      println("[URGENT DEBUG] Index defined to be " + index)

      collected_cells :+ indexKey.lookup(index)
    }

    println("[MESSAGE] collected_cells size: " + collected_cells.length)

    val single_cells = collected_cells.flatMap(collected_cells => collected_cells);
    val counted_cells = single_cells.map(cell => (cell, 1).reduceByKey{case (x, y) => x + y})

    // val result = counted_cells.reduceByKey((a,b) => (a+b))
    // val inmem = counted_cells.persist()
    //
    // // Collect driver into file to be put into user archive
    // inmem.saveAsTextFile("path to server location")
    // ==> Not necessary to save the result as processing time is recorded, not output
  }
}
The bottom part is currently commented out while I try to debug it, but it acts as pseudo-code so I know what needs to be done. I should point out that I am barely familiar with Scala, so things like the _ notation confuse the life out of me.
Thanks for your time.
There are some concepts that need clarification in the question:
When we execute this code:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val fields = lines.map(line => line.split(","))
That does not result in a huge array the size of the data: RDD transformations are lazy, so the expression only describes a transformation of the base data. It can be further transformed until we have reduced the data to the information we want.
In this case, we want the stock_symbol field of a record encoded as CSV:
exchange, stock_symbol, date, stock_price_open, stock_price_high, stock_price_low, stock_price_close, stock_volume, stock_price_adj_close
I'm also going to assume that the data file contains a banner like this:
*---------*
| NASDAQ: |
*---------*
The first thing we're going to do is remove anything that looks like this banner. In fact, I'm going to assume that the first field is the name of a stock exchange and therefore starts with a letter. We will apply this filter before we do any splitting, resulting in:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val validLines = lines.filter(line => !line.isEmpty && line.head.isLetter)
val fields = validLines.map(line => line.split(","))
It helps to write the types of the variables, to have peace of mind that we have the data types that we expect. As we progress in our Scala skills that might become less important. Let's rewrite the expression above with types:
val lines: RDD[String] = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val validLines: RDD[String] = lines.filter(line => !line.isEmpty && line.head.isLetter)
val fields: RDD[Array[String]] = validLines.map(line => line.split(","))
We are interested in the stock_symbol field, which positionally is the element #1 in a 0-based array:
val stockSymbols:RDD[String] = fields.map(record => record(1))
If we want to count the symbols, all that's left is to issue a count:
val totalSymbolCount = stockSymbols.count()
That's not very helpful because we have one entry for every record. Slightly more interesting questions would be:
How many different stock symbols do we have?
val uniqueStockSymbols = stockSymbols.distinct.count()
How many records for each symbol do we have?
val countBySymbol = stockSymbols.map(s => (s,1)).reduceByKey(_+_)
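Note that stockSymbols and countBySymbol are still lazy RDDs; nothing is computed until an action runs. For example (a small usage sketch):

// Runs the job and brings the (small) per-symbol counts back to the driver.
countBySymbol.collect().foreach { case (symbol, n) =>
  println(s"$symbol: $n")
}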
In Spark 2.0, CSV support for DataFrames and Datasets is available out of the box.
Given that our data does not have a header row with the field names (as is usual in large datasets), we need to provide the column names:
val stockDF = sparkSession.read.csv("/tmp/quotes_clean.csv")
  .toDF("exchange", "symbol", "date", "open", "high", "low", "close", "volume", "adj_close")
We can answer our questions very easily now:
val uniqueSymbols = stockDF.select("symbol").distinct().count
val recordsPerSymbol = stockDF.groupBy($"symbol").agg(count($"symbol"))
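A small caveat on the DataFrame version: the $"symbol" syntax and the count aggregate require the usual imports, and recordsPerSymbol is only computed when an action runs. A sketch, assuming sparkSession is the active SparkSession:

import org.apache.spark.sql.functions.count // the count(...) aggregate used above
import sparkSession.implicits._             // enables the $"column" syntax

recordsPerSymbol.show() // runs the aggregation and prints the per-symbol counts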

How to loop through table based on unique date in MATLAB

I have this table named BondData which contains the following:
Settlement Maturity Price Coupon
8/27/2016 1/12/2017 106.901 9.250
8/27/2019 1/27/2017 104.79 7.000
8/28/2016 3/30/2017 106.144 7.500
8/28/2016 4/27/2017 105.847 7.000
8/29/2016 9/4/2017 110.779 9.125
For each day in this table, I want to perform a certain task: assign several values to variables and perform the necessary computations. The logic is like:
do while Settlement is the same
m_settle=current_row_settlement_value
m_maturity=current_row_maturity_value
and so on...
my_computation_here...
end
Essentially, I want to loop through my settlement dates and perform the task for as long as the date is the same.
EDIT: Just to clarify my issue, I am implementing yield curve fitting using the Nelson-Siegel and Svensson models. Here is my code so far:
function NS_SV_Models()
    load bondsdata
    BondData = table(Settlement,Maturity,Price,Coupon);
    BondData.Settlement = categorical(BondData.Settlement);
    Settlements = categories(BondData.Settlement); % get all unique Settlement
    for k = 1:numel(Settlements)
        rows = BondData.Settlement==Settlements(k);
        Bonds.Settle = Settlements(k);              % current_row_settlement_value
        Bonds.Maturity = BondData.Maturity(rows);   % current_row_maturity_value
        Bonds.Prices = BondData.Price(rows);
        Bonds.Coupon = BondData.Coupon(rows);
        Settle = Bonds.Settle;
        Maturity = Bonds.Maturity;
        CleanPrice = Bonds.Prices;
        CouponRate = Bonds.Coupon;
        Instruments = [Settle Maturity CleanPrice CouponRate];
        Yield = bndyield(CleanPrice,CouponRate,Settle,Maturity);
        NSModel = IRFunctionCurve.fitNelsonSiegel('Zero',Settlements(k),Instruments);
        SVModel = IRFunctionCurve.fitSvensson('Zero',Settlements(k),Instruments);
        NSModel.Parameters
        SVModel.Parameters
    end
end
Again, my main objective is to get each model's parameters (beta0, beta1, beta2, etc.) on a per-day basis. I am getting an error in Instruments = [Settle Maturity CleanPrice CouponRate]; because Settle contains only one record (8/27/2016) when it is supposed to have two, since there are two rows for this date. Also, I noticed that Maturity, CleanPrice, and CouponRate contain all records; they should only contain the respective data for each day.
Hope I made my issue clearer now. By the way, I am using MATLAB R2015a.
Use a categorical array. Here is your function (without its header line, and with all rows I can't run commented out):
BondData = table(datetime(Settlement),datetime(Maturity),Price,Coupon,...
    'VariableNames',{'Settlement','Maturity','Price','Coupon'});
BondData.Settlement = categorical(BondData.Settlement);
Settlements = categories(BondData.Settlement); % get all unique Settlement
for k = 1:numel(Settlements)
    rows = BondData.Settlement==Settlements(k);
    Settle = BondData.Settlement(rows); % current_row_settlement_value
    Mature = BondData.Maturity(rows);   % current_row_maturity_value
    CleanPrice = BondData.Price(rows);
    CouponRate = BondData.Coupon(rows);
    Instruments = [datenum(char(Settle)) datenum(char(Mature))...
        CleanPrice CouponRate];
    % Yield = bndyield(CleanPrice,CouponRate,Settle,Mature);
    %
    % NSModel = IRFunctionCurve.fitNelsonSiegel('Zero',Settlements(k),Instruments);
    % SVModel = IRFunctionCurve.fitSvensson('Zero',Settlements(k),Instruments);
    %
    % NSModel.Parameters
    % SVModel.Parameters
end
Keep in mind the following:
You cannot concatenate variables of different types as you try to do in Instruments = [Settle Maturity CleanPrice CouponRate];
There is no need for the structure Bonds; you don't really use it (e.g. Settle = Bonds.Settle;).
Use the relevant functions to convert between datetime objects and strings or numbers, for instance datenum(char(Settle)) in the code above. I don't know what kind of input you need to pass to the functions that follow.
