In the link agent, I came across attributes like maxPropagationDelay and reservationGuardTime. What is the role of these attributes? Where can I find more information about them?
These are parameters of specific LINK protocols.
maxPropagationDelay is used to determine timeouts based on expected round-trip times in the network. If the network is small enough for a single-hop connection between any pair of nodes, set it to a value based on the geographical size of your network; otherwise, set it to a value based on the maximum communication range of your modem.
reservationGuardTime is a small extra time for which a channel is reserved, to allow for the practical timing jitter of modems. The default value provided by the agent is usually good enough for most purposes.
The Underwater Networks handbook, to be released with the upcoming version of UnetStack3, will provide a lot more guidance on many of these parameters and on how to set up various types of networks using UnetStack.
You can access more information about any parameter of any agent in UnetStack using the help command. For the link agent, you'll see this in UnetStack 1.4:
> help link
link - access to link agent
Examples:
link // access parameters
link.maxRetries = 5 // set maximum retries for reliable delivery
link << new DatagramReq(to: 2, data: [1,2,3], reliability: true)
// send reliable datagram
Parameters:
MTU - maximum data transfer size
maxRetries - maximum retries for reliable delivery
reservationGuardTime - guard period (s)
maxPropagationDelay - maximum propagation delay (s)
dataChannel - channel to use for data frames (0 = control, 1 = data)
reservationGuardTime is the additional guard time that can be added to the frame duration when reserving a channel (using a MAC), to ensure channel reservations have some delay in between so that nodes are able to react.
maxPropagationDelay is used to estimate the maximum time that an acknowledgement to a request (or a series of requests, if fragmentation is needed) might take, and is used to set timeouts for transmissions or to make channel reservations (if using a MAC). Depending on your simulation/setup, you can change this number to the longest one-way propagation time between two nodes that can communicate.
I have a use case where a large number of logs will be consumed by Apache Flink CEP. My use case is to detect brute force attacks and port scanning attacks. The challenge here is that in ordinary CEP we compare a value against a constant, like event = "login". In this case the criteria are different: for a brute force attack the criteria are as follows.
username is constant and event = "login failure" (the condition is that the event happens 5 times within 5 minutes).
That is, logs with the login failure event are received for the same username 5 times within 5 minutes.
And for port scanning we have the following criteria:
ip address is constant and dest port is variable (the condition is that the event happens 10 times within 1 minute). That is, logs with a constant ip address are received for 10 different ports within 1 minute.
With Flink, when you want to process the events for something like one username or one ip address in isolation, the way to do this is to partition the stream by a key, using keyBy(). The training materials in the Flink docs have a section on Keyed Streams that explains this part of the DataStream API in more detail. keyBy() is roughly the same concept as GROUP BY in SQL, if that helps.
With CEP, if you first key the stream, then the pattern will be matched separately for each distinct value of the key, which is what you want.
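For illustration, a minimal sketch of the keying step, assuming a hypothetical Event POJO with username, ip, and port fields parsed from the logs (these names are placeholders, not part of your data model):
// Hypothetical Event POJO (username, ip, port, event fields) assumed for illustration
DataStream<Event> events = ...;  // parsed log events

// Brute-force rule: one logical partition per username
KeyedStream<Event, String> byUser = events.keyBy(e -> e.username);

// Port-scan rule: one logical partition per ip address
KeyedStream<Event, String> byIp = events.keyBy(e -> e.ip);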
However, rather than CEP, I would instead recommend Flink SQL, perhaps in combination with MATCH_RECOGNIZE, for this use case. MATCH_RECOGNIZE is a higher-level API, built on top of CEP, and it's easier to work with. In combination with SQL, the result is quite powerful.
You'll find some Flink SQL training materials and examples (including examples that use MATCH_RECOGNIZE) in Ververica's GitHub account.
Update
To be clear, I wouldn't use MATCH_RECOGNIZE for these specific rules; neither it nor CEP is needed for this use case. I mentioned it in case you have other rules where it would be helpful. (My reason for not recommending CEP in this case is that implementing the distinct constraint might be messy.)
For example, for the port scanning case you can do something like this:
SELECT e1.ip, COUNT(DISTINCT e2.port)
FROM events e1, events e2
WHERE e1.ip = e2.ip AND timestampDiff(MINUTE, e1.ts, e2.ts) < 1
GROUP BY e1.ip HAVING COUNT(DISTINCT e2.port) >= 10;
The login case is similar, but easier.
Note that when working with streaming SQL, you should give some thought to state retention.
Further update
This query is likely to return a given IP address many times, but it's not desirable to generate multiple alerts.
This could be handled by inserting matching IP addresses into an Alert table, and only generating alerts for IPs that aren't already there.
Or the output of the SQL query could be processed by a de-duplicator implemented using the DataStream API, similar to the example in the Flink docs. If you only want to suppress duplicate alerts for some period of time, use a KeyedProcessFunction instead of a RichFlatMapFunction, and use a Timer to clear the state when it's time to re-enable alerts for a given IP.
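A minimal sketch of such a de-duplicator, assuming the query output is keyed by IP address and using a hypothetical one-hour suppression period (the class and state names are illustrative):
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Emits the first alert for each key (the IP address) and suppresses further alerts
// for that key until a processing-time timer clears the state.
public class AlertDeduplicator extends KeyedProcessFunction<String, String, String> {

    private static final long SUPPRESSION_MS = 60 * 60 * 1000; // hypothetical 1 hour

    private transient ValueState<Boolean> alreadyAlerted;

    @Override
    public void open(Configuration parameters) {
        alreadyAlerted = getRuntimeContext().getState(
                new ValueStateDescriptor<>("alreadyAlerted", Boolean.class));
    }

    @Override
    public void processElement(String ip, Context ctx, Collector<String> out) throws Exception {
        if (alreadyAlerted.value() == null) {
            out.collect(ip); // first alert for this IP: let it through
            alreadyAlerted.update(true);
            // re-enable alerts for this IP once the suppression period has passed
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + SUPPRESSION_MS);
        }
        // otherwise: duplicate within the suppression period, drop it
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        alreadyAlerted.clear();
    }
}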
Yet another update (concerning CEP and distinctness)
Implementing this with CEP should be possible. You'll want to key the stream by the IP address, and have a pattern that has to match within one minute.
The pattern can be roughly like this:
Pattern<Event, ?> pattern = Pattern
    .<Event>begin("distinctPorts")
    .where(distinctPortCondition)   // iterative condition 1
    .oneOrMore()
    .followedBy("end")
    .where(endCondition)            // iterative condition 2
    .within(Time.minutes(1));
The first iterative condition returns true if the event being added to the pattern has a port distinct from the ports of all of the previously matched events. This is somewhat similar to the example here, in the docs.
The second iterative condition returns true if size("distinctPorts") >= 9 and this event also has yet another distinct port.
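A rough sketch of the first condition (the distinctPortCondition referenced in the pattern above), assuming the same hypothetical Event type with an int port field; the condition for "end" would additionally check that at least 9 events have already been matched:
// Accept an event only if its port differs from every port already matched
// in the "distinctPorts" part of this partial match.
IterativeCondition<Event> distinctPortCondition = new IterativeCondition<Event>() {
    @Override
    public boolean filter(Event event, Context<Event> ctx) throws Exception {
        for (Event previous : ctx.getEventsForPattern("distinctPorts")) {
            if (previous.port == event.port) {
                return false; // this port has already been seen
            }
        }
        return true;
    }
};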
See this Flink Forward talk (youtube video) for a somewhat similar example at the end of the talk.
If you try this and get stuck, please ask a new question, showing us what you've tried and where you're stuck.
I am using Flink to read data from Apache Pulsar.
I have a partitioned topic in pulsar with 8 partitions.
I produced 1000 messages in this topic, distributed across the 8 partitions.
I have 8 cores in my laptop, so I have 8 sub-tasks (by default parallelism = # of cores).
I opened the Flink UI after executing the code from Eclipse, and I found that some sub-tasks are not receiving any records (they are idle).
I am expecting that all the 8 sub-tasks will be utilized (I am expecting that each sub-task will be mapped to one partition in my topic).
After restarting the job, I found that sometimes 3 sub-tasks are utilized and sometimes 4, while the remaining sub-tasks stay idle.
Please help me clarify this scenario.
Also, how can I know whether there is a shuffle between sub-tasks or not?
My Code:
ConsumerConfigurationData<String> consumerConfigurationData = new ConsumerConfigurationData<>();
Set<String> topicsSet = new HashSet<>();
topicsSet.add("flink-08");
consumerConfigurationData.setTopicNames(topicsSet);
consumerConfigurationData.setSubscriptionName("my-sub0111");
consumerConfigurationData.setSubscriptionType(SubscriptionType.Key_Shared);
consumerConfigurationData.setConsumerName("consumer-01");
consumerConfigurationData.setSubscriptionInitialPosition(SubscriptionInitialPosition.Earliest);
PulsarSourceBuilder<String> builder = PulsarSourceBuilder.builder(new SimpleStringSchema()).pulsarAllConsumerConf(consumerConfigurationData).serviceUrl("pulsar://localhost:6650");
SourceFunction<String> src = builder.build();
DataStream<String> stream = env.addSource(src);
stream.print(" >>> ");
For the Pulsar question, I don't know enough to help. I recommend setting up a larger test and seeing how that turns out. Usually, you'd have more partitions than slots and have some slots consume several partitions in a somewhat random fashion.
Also, how can I know whether there is a shuffle between sub-tasks or not?
The easiest way is to look at the topology of the Flink Web UI. There you should see the number of tasks and the channel types. You could post a screenshot if you want more details but in this case, there is nothing that will be shuffled, since you only have a source and a sink.
I am currently writing a streaming application where:
as an input, I am receiving alerts from a Kafka topic (1 alert is linked to 1 resource, for example to my-router-1, my-switch-1, my-VM-1, my-VM-2, ...)
I then need to query an external system in order to enrich the alert with additional information about the resource to which the alert is linked
When querying the external system:
I do not want to do 1 query per alert, nor even 1 query per resource
I would rather do grouped queries (1 query for several alerts linked to several resources)
My idea was to have something like n buffers (n being a small number representing the number of queries that I will do in parallel), then for a given time period (let's say 100 ms) put all alerts into one of those buffers, and at the end of those 100 ms do my n queries in parallel (1 query being responsible for enriching several alerts belonging to several resources).
In Spark, this is something I would do through mapPartitions (if I have n partitions, then I will do only n queries in parallel to my external system, and each query will be for all the alerts received during the micro-batch for one partition).
Now, I am currently looking at Flink and I haven't really found the best way of doing this kind of grouping when requesting an external system.
When looking at this kind of use case and especially at asyncio (https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/asyncio.html), it seems that it deals with 1 query per key.
For example, I can very easily:
define the resource id as a key
define a processing time window of 100ms
and then do my query to the external system (synchronously, or maybe better, asynchronously through the asyncio feature)
But by doing so, I will do 1 query per resource (maybe for several alerts, but all linked to the same key, i.e. the same resource).
This is not what I want to do, as it will lead to too many queries to the external system.
I've then explored the option of defining a kind of technical key for my requests (something like the hashCode of my resource id % the number of queries I want to perform).
So, if I want to do max 4 queries in parallel, then my key will be something like "resourceId.hashCode % 4".
I thought that it was OK, but when looking more closely at some metrics while running my job, I found that my queries were not well distributed across my 4 operator instances (only 2 of them were doing something).
This comes from the mechanism used to assign a key to a given operator instance:
public static int assignKeyToParallelOperator(Object key, int maxParallelism, int parallelism) {
return computeOperatorIndexForKeyGroup(maxParallelism, parallelism, assignToKeyGroup(key, maxParallelism));
}
(In my case, parallelism is 4, maxParallelism is 128, and my key values are in the range [0, 4[. In this setup, 2 of my keys go to operator instance 3 and 2 go to operator instance 4; operator instances 1 and 2 have nothing to do.)
I was expecting key 0 to go to operator 0, key 1 to operator 1, key 2 to operator 2 and key 3 to operator 3, but that is not the case.
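For reference, this mapping can be checked with Flink's own KeyGroupRangeAssignment utility; the small sketch below just prints where each of the four keys lands. The skew is expected, because the key's hashCode is hashed again into 128 key groups rather than being used directly as an operator index:
import org.apache.flink.runtime.state.KeyGroupRangeAssignment;

public class KeyAssignmentCheck {
    public static void main(String[] args) {
        int maxParallelism = 128;
        int parallelism = 4;
        for (int key = 0; key < 4; key++) {
            int operator = KeyGroupRangeAssignment.assignKeyToParallelOperator(
                    key, maxParallelism, parallelism);
            System.out.println("key " + key + " -> operator instance " + operator);
        }
    }
}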
So do you know what would be the best approach for doing this kind of grouping while querying an external system?
i.e. 1 query per operator instance for all the alerts "received" by this operator instance during the last 100 ms.
You can put an aggregator function upstream of the async function, where that function (using a timed window) outputs a record with <resource id><list of alerts to query>. You'd key the stream by the <resource id> ahead of the aggregator, which should then get pipelined to the async function.
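A rough sketch of that shape, assuming a hypothetical Alert POJO with a resourceId field and a hypothetical EnrichmentAsyncFunction (an AsyncFunction<List<Alert>, EnrichedAlert> that issues one grouped query per batch and completes the result future with all enriched alerts); the names and the 100 ms window are illustrative, not a definitive implementation:
// Key by resource id, collect the alerts for each 100 ms window into a batch,
// then let an async function issue one enrichment query per batch.
DataStream<Alert> alerts = ...;  // alerts read from the Kafka topic

DataStream<List<Alert>> batches = alerts
    .keyBy(a -> a.resourceId)
    .window(TumblingProcessingTimeWindows.of(Time.milliseconds(100)))
    .aggregate(new AggregateFunction<Alert, List<Alert>, List<Alert>>() {
        @Override public List<Alert> createAccumulator() { return new ArrayList<>(); }
        @Override public List<Alert> add(Alert alert, List<Alert> acc) { acc.add(alert); return acc; }
        @Override public List<Alert> getResult(List<Alert> acc) { return acc; }
        @Override public List<Alert> merge(List<Alert> a, List<Alert> b) { a.addAll(b); return a; }
    });

// One asynchronous call per batch; the timeout and capacity values here are arbitrary.
DataStream<EnrichedAlert> enriched = AsyncDataStream.unorderedWait(
    batches, new EnrichmentAsyncFunction(), 1, TimeUnit.SECONDS, 100);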
I am trying to sum network traffic in/out from different IDCs, using snmp_exporter to get this information, but sometimes snmp_exporter can't get some switches' information, maybe because of a timeout or loss. So there is no data update for those switches and "/metrics" will only show part of the traffic info. The problem is that when I use
sum(irate(ifInOctets{ifIndex=...,instance=...})) +
sum(irate(ifInOctets{ifIndex=...,instance=...}))+
sum(irate(ifInOctets{ifIndex=...,instance=...}))
to get the total traffic value, the expression returns no data and breaks the graph.
I am a newbie to Prometheus, and not sure if I am using it the wrong way.
Thanks
The way to approach this is to use rate() with a long enough range to tolerate a failed scrape. For example, if you are scraping once a minute then 5m is enough, so you would use:
sum without(instance) (rate(ifInOctets[5m]))
I am using the JavaMail library to fetch emails from several servers through IMAP.
I only care about unread messages and I want to download only the last 5 received unread messages.
For filtering the messages in a folder I am using the Folder.search(FlagTerm ft) method, passing the SEEN flag with value false, as the following code shows:
FlagTerm ft = new FlagTerm(new Flags(Flags.Flag.SEEN), false);
Message[] messages = folder.search(ft);
I need to reduce bandwidth usage, and the above method may return an arbitrarily large number of messages. I am only interested in the last 5 of them; is there a way for the IMAP server to return a limited number of messages?
You can do the search over a subset of the messages, effectively setting an upper limit on the number of messages returned, but you might have to do multiple searches. There's no direct way to limit the number of results returned if you're searching all messages.
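For example, here is a sketch that searches backwards over fixed-size chunks of the folder until 5 unread messages have been found (the chunk size of 50 is an arbitrary choice for illustration):
// Search progressively older chunks of the folder, stopping once 5 unread messages are found.
FlagTerm unseen = new FlagTerm(new Flags(Flags.Flag.SEEN), false);
List<Message> lastUnread = new ArrayList<>();
int chunkSize = 50;                       // arbitrary chunk size
int end = folder.getMessageCount();       // message numbers are 1-based
while (end >= 1 && lastUnread.size() < 5) {
    int start = Math.max(1, end - chunkSize + 1);
    Message[] chunk = folder.getMessages(start, end);
    Message[] unread = folder.search(unseen, chunk);  // search only within this chunk
    // walk backwards so the most recent unread messages are collected first
    for (int i = unread.length - 1; i >= 0 && lastUnread.size() < 5; i--) {
        lastUnread.add(unread[i]);
    }
    end = start - 1;
}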
Note that the search results are relatively compact (effectively only the message number), so unless you're searching a huge number of messages I wouldn't think bandwidth would be an issue relative to fetching the content of the messages.