Read Flink latency tracking metric in Datadog - apache-flink

I'm following this doc https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/metrics/#end-to-end-latency-tracking and enabled metrics.latency.interval in flink-conf.yaml as shown below:
metrics.latency.interval: 60000
metrics.latency.granularity: operator
Now, I have the following questions:
How can I find out which metrics (a list of metric names) are enabled? I didn't find any in the metrics UI.
Datadog is my reporter. Will the latency metrics be sent to Datadog just like the other system metrics listed here https://docs.datadoghq.com/integrations/flink/#data-collected? If so, what are their names? If not, what do I need to do to get them into Datadog?
I'm new to both Flink and Datadog. Many thanks!

You can access these metrics via the REST API:
http://{job_manager_address}:8081/jobs/{job_id}/metrics
which will return:
[
  {
    "id": "latency.source_id.3d28eee20f19966ad0843c8183e96045.operator_id.9c9bbdbebfd61a4aaac39e2c417a4f21.operator_subtask_index.7.latency_min"
  },
  {
    "id": "latency.source_id.bca0e5ddee87a6f64a26077804c63e69.operator_id.197249262ed30764bb323b65405e10b4.operator_subtask_index.14.latency_p75"
  },
  {
    "id": "latency.source_id.bca0e5ddee87a6f64a26077804c63e69.operator_id.b9d4ed4c91fec482427d3584100b1c90.operator_subtask_index.12.latency_median"
  }
]
The first entry, for example, is the latency from source_id 3d28eee20... to operator_id 9c9bbdbe... for operator subtask index 7.
However, I don't know the exact meaning of latency_p75 or latency_min. Maybe someone else can help us both.
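For example, here is a minimal Python sketch of pulling that list over the REST API (the job manager address and job id are placeholders to replace with your own):

import requests

# Placeholders -- substitute your own job manager address and job id.
JOB_MANAGER = "http://localhost:8081"
JOB_ID = "<your_job_id>"

# List all job-level metrics and keep only the latency histograms.
resp = requests.get("{}/jobs/{}/metrics".format(JOB_MANAGER, JOB_ID))
resp.raise_for_status()
latency_metric_ids = [m["id"] for m in resp.json() if m["id"].startswith("latency.")]
for metric_id in latency_metric_ids:
    print(metric_id)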

#monstero has explained where to find the latency metrics -- they are job metrics.
The latency metrics are histogram metrics. latency_p75, for example, is the 75th percentile latency, meaning that 75% of the time the latency was less than the reported value.
In all, you can access the min, max, mean, median, stddev, p75, p90, p95, p98, p99, and p999.
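If you want the reported values and not just the names, the same REST endpoint accepts a get parameter with a comma-separated list of metric ids. A small sketch, again with placeholder addresses and one of the metric ids from the listing above:

import requests

JOB_MANAGER = "http://localhost:8081"  # placeholder
JOB_ID = "<your_job_id>"               # placeholder

# One of the latency histogram metrics returned by the listing call above.
METRIC_ID = ("latency.source_id.3d28eee20f19966ad0843c8183e96045."
             "operator_id.9c9bbdbebfd61a4aaac39e2c417a4f21."
             "operator_subtask_index.7.latency_min")

resp = requests.get("{}/jobs/{}/metrics".format(JOB_MANAGER, JOB_ID),
                    params={"get": METRIC_ID})
resp.raise_for_status()
for metric in resp.json():  # each entry is {"id": ..., "value": ...}
    print(metric["id"], metric["value"])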

Related

get the top 10 javascript/opensource repositories ranked by star using GitHub GraphQL Api

I would like to get the top 10 JavaScript/open-source repositories ranked by stars (and some related information) using the GitHub GraphQL API in a Python project. I have this query so far:
query {
  search(type: REPOSITORY, query: "language:javascript", first: 10) {
    userCount
    edges {
      node {
        ... on Repository {
          name
          url
          stargazers {
            totalCount
          }
          owner {
            login
          }
        }
      }
    }
  }
}
The problem is that it does not always return the same result: each query returns 10 random repositories ordered by star count rather than the absolute top 10.
And on top of that I’d like to get the ones that are open source.
I use the query
query {
  licenses {
    name
  }
}
to get a list of licenses, but I don't know if this is an exhaustive list (it seems to be missing some licenses, like MIT). According to the doc, it is:
Return a list of known open source licenses.
How can I get an exhaustive list of the licenses and add it to my main query above to make my search more precise?
I can't seem to find clear answers, as the documentation for the GitHub GraphQL API is scarce and quite vague.
Thanks
I got a partial explanation from GitHub Support about why the results are inconsistent: it's due to a timeout when queries run for too long.
Some queries are computationally expensive for our search infrastructure to execute. To keep search fast for everyone, we limit how long any individual query can run. In rare situations when a query exceeds the time limit, search returns all matches that were found prior to the timeout and informs you that a timeout occurred.
Reaching a timeout does not necessarily mean that search results are incomplete. It just means that the query was discontinued before it searched through all possible data.
Our team wrote about this here:
https://help.github.com/articles/troubleshooting-search-queries/#potential-timeouts
Given this reality, these timeouts may cause inconsistencies while paging through the results. We see how this could be improved in future iterations of search, so we've let our team know so they're aware though we can't make any promises on specific changes.
Edit: as suggested by support, using query: "language:javascript stars:>1600" (1600 is more or less the minimum star count of the top 3000 repos, but it just needs to be big enough to narrow the search) will consistently return the top 10 repos ordered by stars.
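Putting that together, a rough Python sketch of the adjusted query (the token is a placeholder, and the star threshold follows the suggestion above):

import requests

GITHUB_TOKEN = "<your_personal_access_token>"  # placeholder

query = """
query {
  search(type: REPOSITORY, query: "language:javascript stars:>1600", first: 10) {
    edges {
      node {
        ... on Repository {
          name
          url
          stargazers {
            totalCount
          }
          owner {
            login
          }
        }
      }
    }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": query},
    headers={"Authorization": "bearer {}".format(GITHUB_TOKEN)},
)
resp.raise_for_status()
for edge in resp.json()["data"]["search"]["edges"]:
    node = edge["node"]
    print(node["stargazers"]["totalCount"], node["name"], node["url"])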

How do I relate the metrics in Datadog to execution plan operators in Flink?

In my scenario, Flink is sending metrics to Datadog. The Datadog host map is as shown below (I have no idea why it is showing me latency here).
Flink metrics are sent to localhost. The issue arises when the flink-conf.yaml file is configured as follows:
# adding metrics
metrics.reporters: stsd , dghttp
metrics.reporter.stsd.class: org.apache.flink.metrics.statsd.StatsDReporter
metrics.reporter.stsd.host: localhost
metrics.reporter.stsd.port: 8125
# for datadog
metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
metrics.reporter.dghttp.apikey: xxx
metrics.reporter.dghttp.tags: host:localhost, job_id : jobA , tm_id : task1 , operator_name : operator1
metrics.scope.operator: numRecordsIn
metrics.scope.operator : numRecordsInPerSecond
metrics.scope.operator : numRecordsOut
metrics.scope.operator : numRecordsOutPerSecond
metrics.scope.operator : latency
The issue is that Datadog shows 163 metrics which I don't understand, which I will explain in a moment.
I don't understand the metrics format in Datadog, as it shows me metrics something like this:
As shown in the image above:
Latency is expressed in time
Number of events per second is events/sec
count is some value
So my question is: which metric is this?
Also, the execution plan of my job is something like this
How do I relate the metrics in Datadog with execution plan operators in Flink?
I have read that in Flink 1.3.2 I can use tags; I have tried to use them in the flink-conf.yaml file, but I don't fully understand what they mean here.
My ultimate goal is to find the latency and the number of records in and out per second at each operator in this case.
There are a variety of issues here.
1. You've misconfigured the scope formats. (metrics.scope.operator)
For one, the configuration doesn't make sense, since you specify "metrics.scope.operator" multiple times; only the last config entry is honored.
Second, and more importantly, you have misunderstood what scope formats are used for.
Scope formats configure which context information (like the ID of the task) is included in the reported metric's name.
By setting it to a constant ("latency") you've told Flink not to include anything. As a result, the numRecordsIn metric for every operator is reported as "latency.numRecordsIn".
I suggest simply removing your scope configuration.
2. You've misconfigured the Datadog Tags
I do not understand what you were trying to do with your tags configuration.
The tags configuration option can only be used to provide global tags, i.e. tags that are attached to every single metric, like "Flink".
By default, every metric that the Datadog reporter sends has a tag attached for every available scope variable.
So, if you have an operator named A, then the numRecordsIn metric will be reported with a tag "operator_name:A".
Again, I would suggest simply removing your tags configuration.
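For reference, a minimal flink-conf.yaml along those lines (keeping only the Datadog reporter from the configuration above, with the API key as a placeholder, and no scope or tag overrides) might look roughly like this:

# Datadog reporter only; scope formats and tags are left at their defaults,
# so operator_name, task_id, etc. show up in Datadog as tags automatically.
metrics.reporters: dghttp
metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
metrics.reporter.dghttp.apikey: xxx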

How to make aggregations fast on Vespa?

We have 60M documents in an index, hosted on a 4-node cluster.
I want to make sure the configuration is optimised for aggregations on the documents.
This is the sample query:
select * from sources * where (sddocname contains ([{"implicitTransforms": false}]"tweet")) | all(group(n_tA_c) each(output(count() as(count))));
The field n_tA_c contains an array of strings. This is a sample document:
{
  "fields": {
    "add_gsOrd": 63829,
    "documentid": "id:firehose:tweet::815347045032742912",
    "foC": 467,
    "frC": 315,
    "g": 0,
    "ln": "en",
    "m": "ya just wants some fried rice",
    "mTp": 2,
    "n_c_p": [],
    "n_tA_c": [
      "fried",
      "rice"
    ],
    "n_tA_s": [],
    "n_tA_tC": [],
    "sN": "long_delaney1",
    "sT_dlC": 0,
    "sT_fC": 0,
    "sT_lAT": 0,
    "sT_qC": 0,
    "sT_r": 0.0,
    "sT_rC": 467,
    "sT_rpC": 0,
    "sT_rtC": 0,
    "sT_vC": 0,
    "sddocname": "tweet",
    "t": 1483228858608,
    "u": 377606303,
    "v": "false"
  },
  "id": "id:firehose:tweet::815347045032742912",
  "relevance": 0.0,
  "source": "content-root-cluster"
}
The n_tA_c field is an attribute with fast-search enabled:
field n_tA_c type array<string> {
  indexing: summary | attribute
  attribute: fast-search
}
The simple term aggregation query does not come back within 20s and times out. What additional checklist items should we go through to reduce this latency?
$ curl 'http://localhost:8080/search/?yql=select%20*%20from%20sources%20*%20where%20(sddocname%20contains%20(%5B%7B%22implicitTransforms%22%3A%20false%7D%5D%22tweet%22))%20%7C%20all(group(n_tA_c)%20each(output(count()%20as(count))))%3B' | python -m json.tool
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 270 100 270 0 0 13 0 0:00:20 0:00:20 --:--:-- 67
{
  "root": {
    "children": [
      {
        "continuation": {
          "this": ""
        },
        "id": "group:root:0",
        "relevance": 1.0
      }
    ],
    "errors": [
      {
        "code": 12,
        "message": "Timeout while waiting for sc0.num0",
        "source": "content-root-cluster",
        "summary": "Timed out"
      }
    ],
    "fields": {
      "totalCount": 0
    },
    "id": "toplevel",
    "relevance": 1.0
  }
}
These nodes are AWS i3.4xlarge boxes (16 cores, 120 GB).
I might be missing something silly.
You are asking for every unique value and its count(), since your grouping expression does not contain any max(x) limitation. This is a very CPU- and network-intensive task to compute; limiting the number of groups is much faster, e.g.
all(group(n_tA_c) max(10) each(output(count() as(count))));
General comments:
With Vespa, as with any other serving engine, it's important to have enough memory and, for example, swap disabled, so you can index and search data without running into high memory pressure.
How much memory you'll use per document type depends on several factors, but the number of fields defined as attributes and the number of documents per node are important. Redundancy and the number of searchable copies also play a major role.
Grouping over the entire corpus is memory intensive (memory bandwidth reading attribute values), CPU intensive, and also network intensive when there is a high fan-out (see more on precision here http://docs.vespa.ai/documentation/grouping.html, which can limit the number of groups returned per node).
Summarising the checkpoints to take care of when doing aggregations, from the conversation in the other answer plus further documentation (a combined query sketch follows the list):
Always add max(x) to the grouping expression for the number of buckets needed. When data is distributed across multiple content nodes this result can be inaccurate. To increase accuracy, use precision(x) as well to tune accuracy as needed.
If you only need the aggregation buckets and no hits, pass limit 0 in the YQL; this saves the step of loading summaries to be returned to the container.
The attribute fields we filter/aggregate on should be in mode fast-search; otherwise there is no B-tree-like index and the values have to be traversed.
Ensure a constant score for docs with &ranking=unranked in the query.
Enable groupingSessionCache: http://docs.vespa.ai/documentation/reference/search-api-reference.html#groupingSessionCache
Size the content nodes for the tradeoff of latency vs. number of docs using max-hits, as described: http://docs.vespa.ai/documentation/performance/sizing-search.html
If memory is the bottleneck, one can look at the attribute flush strategy configuration: http://docs.vespa.ai/documentation/proton.html#proton-maintenance-jobs
If CPU is the bottleneck, increase parallelism and ensure all cores are used per search: http://docs.vespa.ai/documentation/content/setup-proton-tuning.html#requestthreads-persearch. The change for that goes in services.xml:
<persearch>16</persearch>
Threads per search is 1 by default.
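A rough sketch of the original query with several of these checkpoints applied (the endpoint and the max/precision values are placeholders to tune for your own setup):

import requests

VESPA_ENDPOINT = "http://localhost:8080/search/"  # placeholder

# Grouping limited with max()/precision(); the values here are only examples.
yql = (
    'select * from sources * where '
    '(sddocname contains ([{"implicitTransforms": false}]"tweet")) '
    '| all(group(n_tA_c) max(10) precision(1000) '
    'each(output(count() as(count))));'
)

params = {
    "yql": yql,
    "hits": 0,                       # only the grouping buckets, no summaries
    "ranking": "unranked",           # constant score for all matching docs
    "groupingSessionCache": "true",  # see the search-api-reference link above
}
resp = requests.get(VESPA_ENDPOINT, params=params)
resp.raise_for_status()
print(resp.json())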
The above changes ensured that the query returned a result before the timeout. But I learned that aggregations are not Vespa's primary goal. The write and search latency are much lower than ES at the same scale on identical hardware, but aggregation (especially over multi-valued string fields) is more CPU intensive and has higher latency than ES for the same aggregation query.

Continuous queries in Influxdb ignoring where clause?

I'm having a bit of trouble with continuous queries in InfluxDB 0.8.8.
I'm trying to create a continuous query, but it seems that the WHERE clauses are ignored. I'm aware of the restrictions mentioned here: http://influxdb.com/docs/v0.8/api/continuous_queries.html but I don't think that's the case here.
One row in the time series would contain data like this:
{"hex":"06a0b6", "squawk":"3421", "flight":"QTR028 ", "lat":99.867630, "lon":66.447365, "validposition":1, "altitude":39000, "vert_rate":-64,"track":125, "validtrack":1,"speed":482, "messages":201, "seen":219}
The query I'm running, which works, is the following:
select * from flight_series where time > now() - 30m and flight !~ /^$/ and validtrack = 1 and validposition = 1;
Through it I'm trying to take the last 30 minutes from the current time, check that the flight field is not whitespace, and check that the track/position are valid.
The query returns successfully, but when I add the
into filtered_log
part, the WHERE clause is ignored.
How can I create a continuous query which takes the above-mentioned conditions into consideration? At the very least, how could I extract, with one continuous query, only the rows which have the valid track/heading set to 1 and a flight that is not a whitespace/empty string? The time constraint I could drop from the query and translate into shard retention/duration instead.
Also, can I specify in the continuous query that the data should be saved into a time series located in another database (one with a more relaxed retention/duration policy)?
Thank you!
Later edit:
I've managed to do something closer to my need by using the following cq:
"select time, sequence_number, altitude, vert_rate, messages, squawk, lon, lat, speed, hex, seen from current_flights where ((flight !~ /^$/) AND (validtrack = 1)) AND (validposition = 1) into flight.[flight]"
This creates a series for each 'flight', even for those which have whitespace in the 'flight' field -- for which a 'flight.' series is built.
How could I specify the retention/duration policies for the series generated by the cq above? Can I do something like:
"spaces": [
{
"name": "flight",
"retentionPolicy": "1h",
"shardDuration": "30m",
"regex": "/.*/",
"replicationFactor": 1,
"split": 1
},
...
which would give me a retention of 1h and shard duration of 30m?
I'm a bit confused about where those series are stored, which shard space?
Thanks!
P.S.: My final goal would be the following:
Have a 'window' of 15-30 min max with all the flights around, process some data from them, and after that period is over discard the data, but at the same time move/copy it to another long-term db/series which can be used for historical purposes.
You cannot put time restrictions into the WHERE clause of a continuous query. The server will generate the time restrictions as needed when the CQ runs and must ignore all others. I suspect if you leave out the time restriction the rest of the WHERE clause will be fine.
I don't believe CQs in 0.8 require an aggregation in the SELECT, but you do need to have a GROUP BY clause to tell the CQ how often to run. I'm not sure what you would GROUP BY, perhaps the flight?
You can specify a different retention policy when writing to the new series but not a new database. In 0.8 the retention policy for a series is determined by regex matching on the series name. As long as you select a series name correctly it will go into your desired retention policy.
EDIT: updates for new questions
How could I specify the retention/duration policies for the series
generated by the cq above?
In 0.8.x, the shard space to which a series belongs controls the retention policy. The regex on the shard space determines which series belong to that shard. The shard space regex is evaluated newest to oldest, meaning the first created shard space will be the last regex evaluated. Unfortunately, I do not know if it is possible to create new shard spaces once the database exists. See this discussion on the mailing list for more: https://groups.google.com/d/msgid/influxdb/ce3fc641-fbf2-4b39-9ce7-77e65c67ea24%40googlegroups.com
Can I do something like:
"spaces": [
{
"name": "flight",
"retentionPolicy": "1h",
"shardDuration": "30m",
"regex": "/.*/",
"replicationFactor": 1,
"split": 1
}, ... which would give me a retention of 1h and shard duration of 30m?
That shard space would have a shard duration of 30 minutes, retaining data for 1 hour, meaning any series would only exist in three shards, the current hot shard, the current cold shard, and the shard waiting for deletion.
The regex is /.*/, meaning it would match any series, not just the 'flight.*' series. Perhaps /flight\..*/ is a better regex if you only want those series generated by the CQ in that shard space.
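So a shard space definition aimed only at the CQ-generated series might look roughly like this (the same fields as the configuration quoted above, with only the regex changed and the backslash escaped for JSON; the retention and duration values are whatever you actually want):

"spaces": [
  {
    "name": "flight",
    "retentionPolicy": "1h",
    "shardDuration": "30m",
    "regex": "/flight\\..*/",
    "replicationFactor": 1,
    "split": 1
  }
]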

How to debug long-running mapreduce jobs with millions of writes?

I am using the simple control.start_map() function of the appengine-mapreduce library to start a mapreduce job. This job successfully completes and shows ~43M mapper-calls on the resulting /mapreduce/detail?mapreduce_id=<my_id> page. However, this page makes no mention of the reduce step or any of the underlying appengine-pipeline processes that I believe are still running. Is there some way to return the pipeline ID that this call makes so I can look at the underlying pipelines to help debug this long-running job? I would like to retrieve enough information to pull up this page: /mapreduce/pipeline/status?root=<guid>
Here is an example of the code I am using to start up my mapreduce job originally:
from third_party.mapreduce import control

mapreduce_id = control.start_map(
    name="Backfill",
    handler_spec="mark_tos_accepted",
    reader_spec=(
        "third_party.mapreduce.input_readers.DatastoreInputReader"),
    mapper_parameters={
        "input_reader": {
            "entity_kind": "ModelX"
        },
    },
    shard_count=64,
    queue_name="backfill-mapreduce-queue",
)
Here is the mapping function:
# This is where we keep our copy of appengine-mapreduce
from third_party.mapreduce import operation as op

def mark_tos_accepted(modelx):
    # Skip users who have already been marked
    if (not modelx
            or modelx.tos_accepted == myglobals.LAST_MATERIAL_CHANGE_TO_TOS):
        return
    modelx.tos_accepted = user_models.LAST_MATERIAL_CHANGE_TO_TOS
    yield op.db.Put(modelx)
Here are the relevant portions of the ModelX:
class BackupModel(db.Model):
    backup_timestamp = db.DateTimeProperty(indexed=True, auto_now=True)

class ModelX(BackupModel):
    tos_accepted = db.IntegerProperty(indexed=False, default=0)
For more context, I am trying to debug a problem I am seeing with writes showing up in our data warehouse.
On 3/23/2013, we launched a MapReduce job (let's call it A) over a db.Model (let's call it ModelX) with ~43M entities. 7 hours later, the job "finished" and the /mapreduce/detail page showed that we had successfully mapped over all of the entities, as shown below.
mapper-calls: 43613334 (1747.47/sec avg.)
On 3/31/2013, we launched another MapReduce job (let's call it B) over ModelX. 12 hours later, the job finished with status Success and the /mapreduce/detail page showed that we had successfully mapped over all of the entities, as shown below.
mapper-calls: 43803632 (964.24/sec avg.)
I know that MR job A wrote to all ModelX entities, since we introduced a new property that none of the entities contained before. The ModelX contains an auto_now property like so:
backup_timestamp = ndb.DateTimeProperty(indexed=True, auto_now=True)
Our data warehousing process runs a query over ModelX to find those entities that changed on a certain day and then downloads those entities and stores them in a separate (AWS) database so that we can run analysis over them. An example of this query is:
db.GqlQuery('select * from ModelX where backup_timestamp >= DATETIME(2013, 4, 10, 0, 0, 0) and backup_timestamp < DATETIME(2013, 4, 11, 0, 0, 0) order by backup_timestamp')
I would expect that our data warehouse would have ~43M entities on each of the days that the MR jobs completed, but it is actually more like ~3M, with each subsequent day showing an increase, as shown in this progression:
3/16/13 230751
3/17/13 193316
3/18/13 344114
3/19/13 437790
3/20/13 443850
3/21/13 640560
3/22/13 612143
3/23/13 547817
3/24/13 2317784 // Why isn't this ~43M ?
3/25/13 3701792 // Why didn't this go down to ~500K again?
3/26/13 4166678
3/27/13 3513732
3/28/13 3652571
This makes me think that the op.db.Put() calls issued by the mapreduce job are still running in some pipeline or queue, causing this trickle effect.
Furthermore, if I query for entities with an old backup_timestamp, I can go back pretty far and still get plenty of entities, but I would expect all of these queries to return 0:
In [4]: ModelX.all().filter('backup_timestamp <', 'DATETIME(2013,2,23,1,1,1)').count()
Out[4]: 1000L
In [5]: ModelX.all().filter('backup_timestamp <', 'DATETIME(2013,1,23,1,1,1)').count()
Out[5]: 1000L
In [6]: ModelX.all().filter('backup_timestamp <', 'DATETIME(2012,1,23,1,1,1)').count()
Out[6]: 1000L
However, there is this strange behavior where the query returns entities that it should not:
In [8]: old = ModelX.all().filter('backup_timestamp <', 'DATETIME(2012,1,1,1,1,1)')
In [9]: paste
for o in old[1:100]:
print o.backup_timestamp
## -- End pasted text --
2013-03-22 22:56:03.877840
2013-03-22 22:56:18.149020
2013-03-22 22:56:19.288400
2013-03-22 22:56:31.412290
2013-03-22 22:58:37.710790
2013-03-22 22:59:14.144200
2013-03-22 22:59:41.396550
2013-03-22 22:59:46.482890
2013-03-22 22:59:46.703210
2013-03-22 22:59:57.525220
2013-03-22 23:00:03.864200
2013-03-22 23:00:18.040840
2013-03-22 23:00:39.636020
Which makes me think that the index is just taking a long time to be updated.
I have also graphed the number of entities that our data warehousing downloads and am noticing some cliff-like drops that makes me think that there is some behind-the-scenes throttling going on somewhere that I cannot see with any of the diagnostic tools exposed on the appengine dashboard. For example, this graph shows a fairly large spike on 3/23, when we started the mapreduce job, but then a dramatic fall shortly thereafter.
This graph shows the count of entities returned by the BackupTimestamp GqlQuery for each 10-minute interval for each day. Note that the purple line shows a huge spike as the MapReduce job spins up, and then a dramatic fall ~1hr later as the throttling kicks in. This graph also shows that there seems to be some time-based throttling going on.
I don't think you'll have any reducer functions there, because all you've done is start a mapper. To do a complete mapreduce, you have to explicitly instantiate a MapReducePipeline and call start on it. As a bonus, that answers your question, as it returns the pipeline ID which you can then use in the status URL.
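For reference, a rough sketch of starting a full pipeline that way (the module path mirrors the third_party copy used above, and the reducer spec is a placeholder for whatever your job would actually do, so treat this as an outline rather than a drop-in replacement):

from third_party.mapreduce import mapreduce_pipeline

pipeline = mapreduce_pipeline.MapreducePipeline(
    "Backfill",
    mapper_spec="mark_tos_accepted",
    reducer_spec="my_reducer_function",  # hypothetical reducer
    input_reader_spec=(
        "third_party.mapreduce.input_readers.DatastoreInputReader"),
    mapper_params={"input_reader": {"entity_kind": "ModelX"}},
    shards=64,
)
pipeline.start(queue_name="backfill-mapreduce-queue")

# This is the GUID you can plug into /mapreduce/pipeline/status?root=<guid>
print(pipeline.pipeline_id)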
Just trying to understand the specific problem. Is it that you are expecting a bigger number of entities in your AWS database? I would suspect that the problem lies with the process that downloads your old ModelX entities into an AWS database, that it's somehow not catching all the updated entities.
Is the AWS-downloading process modifying ModelX in any way? If not, then why would you be surprised at finding entities with an old modified time stamp? modified would only be updated on writes, not on read operations.
Kind of unrelated - with respect to throttling I've usually found a throttled task queue to be the problem, so maybe check how old your tasks in there are or if your app is being throttled due to a large amount of errors incurred somewhere else.
control.start_map doesn't use pipeline and has no shuffle/reduce step. When the mapreduce status page shows it's finished, all mapreduce-related taskqueue tasks should have finished. You can examine your queue or even pause it.
I suspect there are problems related to old indexes for the old Model or to eventual consistency. To debug MR, it is useful to filter your warnings/errors log and search by the mr id. To help with your particular case, it might be useful to see your Map handler.
