What is the result of setting the 'creationTime' metadata field of a transaction?

The pact-lang-api library accepts metadata for transactions that it formats to be sent off to a node. One of the metadata fields is creationTime. The closest description of this field I can find is in the YAML section of the Pact docs, which states this metadata field denotes:
optional integer tx execution time after offset
In the pact-lang-api code I've seen in the wild this value is usually set to the current time or a delayed time (for example, in the deploy-contract section of the pact-lang-api cookbook):
Math.round(new Date().getTime() / 1000) - 15
This indicates the current time in seconds from the Unix epoch, minus 15 seconds. In other words, the "optional integer tx execution time after offset" is fifteen seconds before this transaction is sent to a node.
What does this mean? Specifically, I'm trying to understand what the effect of setting this metadata field is, and what might happen if I set it to different values (what if I set it to 15 seconds after the transaction was constructed, instead of 15 seconds before the transaction as the cookbook does?).
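For context on where that number ends up, here is a minimal sketch of building the transaction metadata with pact-lang-api. The Pact.lang.mkMeta argument order shown is an assumption based on common pact-lang-api usage, and the account and gas values are placeholders, so verify against the library version you use:

const Pact = require("pact-lang-api");

// creationTime in seconds since the Unix epoch, backdated 15 seconds,
// as in the cookbook example above
const creationTime = Math.round(new Date().getTime() / 1000) - 15;

// assumed signature: mkMeta(sender, chainId, gasPrice, gasLimit, creationTime, ttl)
const meta = Pact.lang.mkMeta(
  "some-sender-account", // hypothetical sender account
  "0",                   // chain id
  0.00001,               // gas price
  10000,                 // gas limit
  creationTime,
  28800                  // ttl in seconds
);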

Related

Apache Flink Is Windowing dependent on Timestamp assignment of EventTime Events

I am new to Apache Flink and am trying to understand how the concepts of EventTime and Windowing are handled by Flink.
So here's my scenario:
I have a program that runs as a thread and creates a file with 3 fields every second, of which the 3rd field is the timestamp.
There is a little tweak though: every 5 seconds I enter an older timestamp (t-5, you could say) into the newly created file.
Now I run the stream processing job, which reads the 3 fields above into a tuple.
Now I have defined the following code for watermarking and timestamp generation:
WatermarkStrategy
.<Tuple3<String, Integer, Long>>forBoundedOutOfOrderness(Duration.ofSeconds(4))
.withTimestampAssigner((event, timestamp) -> event.f2);
And then I use the following code for windowing the above and trying to get the aggregation:
withTimestampsAndWatermarks
.keyBy(0)
.window(TumblingEventTimeWindows.of(Time.milliseconds(4000)))
.reduce((x,y) -> new Tuple3<String, Integer, Long>(x.f0, x.f1 + y.f1,y.f2))
As you can see, I am trying to aggregate the numbers within each window (a little more context: the values in the field I am aggregating, f1, are all 1s).
Hence I have the following questions:
The window is just 4 seconds wide, and every fifth entry carries an older timestamp, so I am expecting the next window to have a lower count. Is my understanding wrong here?
If my understanding is right: I do not see any aggregation when running both programs in parallel. Is there something wrong with my code?
Another thing that is bothering me: what fields or parameters do the window's start and end times really depend on? Is it the timestamp extracted from the events, or the processing time?
You have to configure the allowed lateness: https://nightlies.apache.org/flink/flink-docs-release-1.2/dev/windows.html#allowed-lateness. If it is not configured, Flink will drop the late messages, so the next window will contain fewer elements than the previous one.
The window start is assigned by the following calculation:
return timestamp - (timestamp - offset + windowSize) % windowSize
In your case, offset is 0 (the default). For an event time window, the timestamp is the event time; for a processing time window, it is the processing time at the Flink operator. E.g. if windowSize=3 and timestamp=122, the element will be assigned to the window [120, 123).
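A small sketch combining both points, reusing the variable names from the question's pipeline (the lateness value is illustrative):

// Keep late elements instead of dropping them by allowing some lateness
withTimestampsAndWatermarks
    .keyBy(0)
    .window(TumblingEventTimeWindows.of(Time.milliseconds(4000)))
    .allowedLateness(Time.seconds(5))
    .reduce((x, y) -> new Tuple3<String, Integer, Long>(x.f0, x.f1 + y.f1, y.f2));

// Worked example of the window-assignment formula with offset = 0:
long timestamp = 122, offset = 0, windowSize = 3;
long windowStart = timestamp - (timestamp - offset + windowSize) % windowSize; // 120
// so the element lands in the window [120, 123)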

Show second to last item in Influx query (or ignore last)

I am using Grafana to show the number of entries added to the database every minute, and I would like to display the most recent fully counted value.
If I give the following command:
SELECT count("value") FROM "numSv" GROUP BY time(1m)
1615904700000000000 60
1615904760000000000 60
1615904820000000000 60
1615904880000000000 60
1615904940000000000 36
Grafana is going to display the last entry, which is still in the process of counting. How can I display the n[-1] entry, which has been fully counted?
Otherwise, how do I ask Influx to give me the same results excluding the last dataset?
P.S.: Using WHERE time > now() - 60s, etc... doesn't work.
Use "magic" Grafana time range math and select dashboard time range from now-1m/m to now-1m/m. That generates an absolute time range, which refers to last fully counted minute. Query is then standard with $timeFilter Grafana macro:
SELECT count("value") FROM "numSv" WHERE $timeFilter

I don't understand how some parameters of Query Store in MS SQL Server work

According to the official documentation, in sys.database_query_store_options we have options which can adjust Query Store workflow and performance.
From documentation:
"flush_interval_seconds - The period for regular flushing of Query Store data to disk in seconds. Default value is 900 (15 min)"
"interval_length_minutes - The statistics aggregation interval in minutes. Arbitrary values are not allowed. Use one of the following: 1, 5, 10, 15, 30, 60, and 1440 minutes. The default value is 60 minutes."
And now I have a problem:
If Query Store flushes data to disk every 15 minutes, why do I see a query in the QS tables seconds after its execution?
As I understand it, the QS tables are 'permanent' and stored in the database (on disk), so how does this parameter (flush_interval_seconds) work?
The same question applies to interval_length_minutes: when I saved the QS output 1 minute after the last query execution and again after 61 minutes, I realised the two are more or less the same, so what about this aggregation?
flush_interval_seconds - the period for regular flushing of Query Store data to disk, in seconds. That means flushing from memory to disk so that the information won't be lost after a server restart. Before the flush, you are simply reading the info from memory.
interval_length_minutes - this is the aggregation interval for query runtime statistics. The lower it is, the finer the granularity of the runtime statistics becomes.
Neither option sets a period after which the info becomes available.
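As a reference, a short T-SQL sketch for checking and changing the two options discussed above (run it in the database whose Query Store you are inspecting; the values are illustrative):

-- Inspect the current Query Store settings
SELECT actual_state_desc, flush_interval_seconds, interval_length_minutes
FROM sys.database_query_store_options;

-- Adjust the flush interval and the statistics aggregation interval
ALTER DATABASE CURRENT
SET QUERY_STORE (DATA_FLUSH_INTERVAL_SECONDS = 900, INTERVAL_LENGTH_MINUTES = 15);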

Newrelic custom plugin metrics

I'm working directly with the HTTP API and trying to get some metrics from our storage.
The doc states "Tip: If you want the metric to appear as a percentage in the user interface, then you must define it as a percentage in the JSON."
However, I can't send metric values that are percentages; the POST response has status 400 with the body
{"error":"Unable to parse request: null"}
My POST is
{"components": [
{"duration": 1,
"guid": "com.cumulus.Test5",
"name":"ServerX",
"metrics": {
"Component/Filesystem/root/Percentage Used": "62%"
}
}],
"agent": {"host": "vss-syd", "version": "1.0.0", "pid": 1080}
}
Also, I have a metric "Number of devices offline" (for a ZFS storage pool) which is discrete, i.e. not continuous, so averages don't make sense, only absolute values.
For this metric I'd like to set an alert if it gets above 0.
I know the threshold is only 'greater than', so I can set thresholds at 0.1 (Alert) and 0.2 (Critical), no problem.
However, can someone please point me in the right direction as to how I should
send such a metric (i.e. do I need to specify [units] and aggregates?), and
create the Summary Metric + graphs in the frontend (which 'Value' to select, e.g. 'Calls per minute')?
There are two issues that look like they could be the cause.
The first is that the duration should be 60, which represents the number of seconds for which the reported metrics correspond. NewRelic is optimized to work with this particular interval and while you can have larger values (300 seconds is the recommended maximum), the minimum required value is 60. Smaller values may be accepted by the API, but the results will be unpredictable.
The second is that the percentage used is a string value which should instead be reported as an integer value, such as 62, or a float value of 62.0 if you wish to preserve that level of precision.
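Put together, the posted payload would look roughly like this once both fixes are applied (GUID, host and metric name carried over from the question):

{
  "agent": {"host": "vss-syd", "version": "1.0.0", "pid": 1080},
  "components": [
    {
      "duration": 60,
      "guid": "com.cumulus.Test5",
      "name": "ServerX",
      "metrics": {
        "Component/Filesystem/root/Percentage Used": 62.0
      }
    }
  ]
}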
Regarding the second portion of your question about reporting and displaying a metric related to "# of Failing Disks":
New Relic does not currently support reporting metrics that represent absolute values. All metric values are presented in aggregate over some particular time period. Summary Metrics are aggregated over the most recent ~4 minutes, while metrics on charts and tables are aggregated over the time period selected in the time picker.
That said, you could try something along the lines of "percentage of failing disks" where perhaps an average might still be useful in that any non-zero value indicates a failure.
This average would be of questionable value once the aggregation time period became larger than a few minutes. However, given that summary metrics are always aggregated over a fixed time period of ~4 minutes (and it is summary metrics that trigger alerts), this may still be useful to you.

What is the maximum length in seconds to store a value in memcache

The Google App Engine memcache documentation states that the time parameter of memcache.set() is an "Optional expiration time, either relative number of seconds from current time (up to 1 month), or an absolute Unix epoch time."
So I tried to set a value for 30 days, which according to Google is 2 592 000 seconds.
However, I highly suspect that this value is too high, because the value was set (memcache.set() returned True), but a memcache.get() immediately afterwards always returned None. Reducing this value to 1 728 000 seconds worked fine, as expected.
I guess that once passed the highest value, the time parameter gets interpreted as an absolute Unix epoch time. That would mean that 2 592 000 seconds got interpreted as "Sat, 31 Jan 1970 00:00:00 GMT", which is obviously a date in the past...
So what is the highest value you can enter that will get interpreted as a number of seconds in the future?
Edit: On the local dev server, 2 592 000 seconds worked OK, but not on the production servers. I suppose the two environments interpret the value differently.
Your linked Google documentation is oddly imprecise; the actual memcached documentation is more specific, saying the number may not exceed 2,592,000 (30 days of seconds). (That statement is echoed in the PHP documentation for its memcache extension.) So according to the memcached docs, your first value should have worked, barring implementation issues.
I don't suppose 2,591,999 works? The Google doc does say "up to one month", which, if you assume 30 days in a month (not a valid assumption), would mean up to but not including 2,592,000. That's at odds with the memcached docs, but perhaps there's an implementation difference or something.
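For reference, the two forms the time parameter can take on App Engine look like this (a sketch; whether the relative form accepts exactly 2,592,000 is the open question above):

from google.appengine.api import memcache
import time

# Relative expiration: seconds from now, documented as valid "up to 1 month".
memcache.set('key-relative', 'value', time=2591999)

# Absolute expiration: a Unix epoch timestamp in the future.
# A small value like 2592000 read as an absolute timestamp would point to 1970
# and be expired immediately, which matches the behaviour described above.
memcache.set('key-absolute', 'value', time=int(time.time()) + 20 * 24 * 3600)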
