I am trying to set a datetime X axis for my Shield UI Chart, but am somewhat confused about the data step. I need one-day increments, but I see:
dataStep: 24 * 3600 * 1000
I assume that 24*3600 is the number of seconds in a day, but what is the 1000 multiplier used for?
You are correct, but you are forgetting that the chart works in milliseconds rather than seconds. Therefore one day is not 24*3600 seconds but 24*3600*1000 milliseconds. This will give you a one-day increment on your X axis.
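(For reference: 24 * 3600 * 1000 = 86,400,000, i.e. one day expressed in milliseconds.)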
I'm looking at various libraries to see how I can plot the total return of a stock over a specified time period. The algorithm is simply (data - dataMin / dataMax) * 100 for the % return across the time period. I'm currently using recharts, but there seems to be a limitation in the library: I cannot get the Y axis to plot that number AND show the price at the point the mouse is hovering over.
Thanks!
Edit:
I have something like this at the moment but as you can see, the Y Axis scaling is capped at ~1000% whereas the max should be ~2800%. https://codesandbox.io/s/recharts-issue-template-czf5t
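For what it's worth, the percent-return series itself can be computed independently of recharts. A minimal Python sketch, assuming the intended formula is percent gain over the period minimum (my reading of the question; adjust the base if your formula differs):

prices = [100.0, 120.0, 95.0, 310.0]                 # hypothetical closing prices
base = min(prices)                                   # period minimum as the base (assumed)
returns = [(p - base) / base * 100 for p in prices]  # percent return per data point
print(returns)                                       # last point is ~226% above the period low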
Given the variable 'points', which increases every time the variable 'player' collects a point, how do I logically find a way to reward the user for collecting 30 points within a 5-minute limit? There's no countdown timer.
E.g. the player may have 4 points now, but if he has 34 points 5 minutes later, that also counts.
I was thinking about using timestamps but I don't really know how to do that.
What you are talking about is a "sliding window". Your window is time based. Record each point's timestamp and slide your window over these timestamps. You will need to pick a time increment to slide your window.
Upon each "slide", count your points. When you get the amount you need, "reward your user". The "upon each slide" means you need some sort of timer that calls a function each time to evaluate the result and do what you want.
For example, set a window of 5 minutes and a slide of 1 second. Don't keep a single variable called points; instead, simply keep an array of timestamps. On every timer tick (of 1 second in this case), count the timestamps that fall between t - 5 minutes and t (now); if there are 30 or more, you've met your threshold and can reward your super-fast user. If you need the actual value -- which may be 34 -- you've just computed it, so you can use it.
There may be ways to optimize this. I've provided the naive approach. Timestamps that have gone out of range can be deleted to save space.
If there are "points going into the window" that count, then just add them to the sum.
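A minimal sketch of this approach in Python (my own names; reward_user() is a stand-in for whatever your game does when the threshold is met):

import time

WINDOW_SECONDS = 5 * 60      # 5-minute window
THRESHOLD = 30               # points required inside the window
TICK_SECONDS = 1             # how often the window "slides"

point_times = []             # one timestamp per collected point, instead of a counter

def on_point_collected():
    point_times.append(time.time())

def reward_user(count):
    print(f"Reward! {count} points in the last 5 minutes")

def check_window():
    cutoff = time.time() - WINDOW_SECONDS
    # drop timestamps that have fallen out of the window to save space
    while point_times and point_times[0] < cutoff:
        point_times.pop(0)
    if len(point_times) >= THRESHOLD:
        reward_user(len(point_times))

# naive driver loop; in a real game, hook check_window() into your update/tick instead
while True:
    check_window()
    time.sleep(TICK_SECONDS)

In practice you would also want to remember that the reward has already been given, so it doesn't fire again on every subsequent tick.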
I have a setup with a Beaglebone Black which communicates over I²C with its slaves every second and reads data from them. Sometimes the I²C readout fails, though, and I want to gather statistics about these failures.
I would like to implement an algorithm that displays the percentage of successful communications over the last 5 minutes (up to 24 hours) and updates that value constantly. If I implemented that 'normally', with an array storing success/no success for every second, that would mean a lot of wasted RAM/CPU for a minor feature (especially if I want to see the statistics of the last 24 hours).
Does someone know a good way to do that, or can anyone point me in the right direction?
Why don't you just implement a low-pass filter? For every successful transfer you push in a 1, for every failed one a 0; the result is a number between 0 and 1. Assuming that your transfers happen periodically, this works well -- you just have to adjust the cutoff frequency of the filter to your desired "averaging duration".
However, I can't follow your RAM argument: assuming you store one byte per transfer to represent success or failure, and transfers happen every second, you end up with 86400 bytes per day -- roughly 85KB/day, which is negligible.
EDIT: Cutoff frequency is a term from signal theory; it describes the highest (or lowest) frequency that passes a low-pass (or high-pass) filter.
Implementing a low-pass filter is trivial; something like (pseudocode):
new_val = 1.0  # init: assume no failed transfers yet
alpha = 0.001  # smoothing factor

while True:
    old_val = new_val
    success = do_transfer_and_return_1_on_success_or_0_on_failure()
    new_val = alpha * success + (1 - alpha) * old_val
That's a single-tap IIR (infinite impulse response) filter; single tap because there's only one alpha and thus only one number stored as state.
EDIT2: The value of alpha defines the behaviour of this filter: the smaller alpha is, the longer the effective averaging window.
EDIT3: You can use a filter design tool to give you the right alpha; just set your low-pass filter's cutoff frequency to something like 0.5/integrationLengthInSamples, select an order of 0 for the IIR, and use an elliptic design method (most tools default to Butterworth, but 0th-order Butterworths don't do anything).
I'd use scipy and convert the resulting (b, a) tuple (a will be 1 here) into the coefficients of this feedback form.
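If you'd rather not go through a filter-design tool, a common shortcut (my own suggestion, not part of the answer above) is to derive alpha from a desired averaging time constant via the standard exponential-smoothing relation (1 - alpha) = exp(-T/tau):

import math

T_SAMPLE = 1.0   # seconds between transfers (one transfer per second)
TAU = 300.0      # desired averaging time constant, e.g. roughly 5 minutes (assumed)

alpha = 1.0 - math.exp(-T_SAMPLE / TAU)
print(alpha)     # about 0.0033; plug this into the loop above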
UPDATE: In light of the OP's comment 'determine a trend of which devices are failing', I would recommend the geometric average that Marcus Müller ꕺꕺ put forward.
ACCURATE METHOD
The method below is aimed at obtaining 'well defined' statistics for performance over time that are also useful for 'after the fact' analysis.
Notice that the geometric average 'looks back' over a number of recent messages rather than over a fixed time period.
Maintain a rolling array of 24*60/5 = 288 'prior success rates' (SR[i] with i = -1, -2, ..., -288), each representing a 5-minute interval in the preceding 24 hours.
That will consume about 2.3 KB (288 x 8 bytes) if the elements are 64-bit doubles.
To 'effect' constant updating, use an Estimated 'Current' Success Rate (ECSR) as follows:
ECSR = (t*S/M+(300-t)*SR[-1])/300
where S and M are the counts of successes and messages in the current (partially complete) period, and SR[-1] is the previous (now complete) bucket.
t is the number of seconds elapsed in the current bucket.
NB: When you start up you need to use 300*S/M/t.
In essence the approximation assumes the error rate was steady over the preceding 5 - 10 minutes.
To 'effect' a 24-hour look back, you can either 'shuffle' the data down (by copy or memcpy()) at the end of each 5-minute interval, or implement a 'circular array' by keeping track of the current bucket index.
NB: For many management/diagnostic purposes intervals of 15 minutes are often entirely adequate. You might want to make the 'grain' configurable.
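A minimal Python sketch of the bookkeeping described above (my own names; one transfer per second is assumed, S counts successes, and start-up simply falls back to the raw rate observed so far):

BUCKET_SECONDS = 300              # 5-minute buckets
NUM_BUCKETS = 24 * 60 // 5        # 288 buckets cover 24 hours (~2.3 KB of doubles)

sr = [None] * NUM_BUCKETS         # circular array of prior success rates
current = 0                       # index of the bucket being filled
S = M = t = 0                     # successes, messages, seconds in the current bucket

def record(success):
    # call once per transfer, i.e. once per second
    global S, M, t, current
    M += 1
    S += 1 if success else 0
    t += 1
    if t >= BUCKET_SECONDS:       # bucket complete: store its rate and move on
        sr[current] = S / M
        current = (current + 1) % NUM_BUCKETS
        S = M = t = 0

def ecsr():
    # Estimated 'Current' Success Rate, per the formula above
    prev = sr[(current - 1) % NUM_BUCKETS]
    if M == 0:                    # nothing recorded in the current bucket yet
        return prev
    if prev is None:              # start-up: no completed bucket yet
        return S / M
    return (t * S / M + (BUCKET_SECONDS - t) * prev) / BUCKET_SECONDS

Switching to the 15-minute grain mentioned in the NB only changes BUCKET_SECONDS and NUM_BUCKETS.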
I want to confirm the "exptime" unit in "lcb_store_cmd_t".
I can NOT find definitive unit information in the lcb_store API documentation (here: http://www.couchbase.com/autodocs/couchbase-c-client-2.1.3/lcb_store.3couchbase.html).
Although I have read the expiration explanation in the dev guide (here: http://docs.couchbase.com/couchbase-devguide-2.1/#about-document-expiration), I still can NOT confirm that the exptime unit is seconds.
I want to set the expiry time to two days (172800 seconds), so I assign the exptime param 172800 and then call lcb_store. Is this OK?
It is seconds. Personally I think the dev guide is quite explicit on the issue, but perhaps part of the ambiguity is that values greater than 30 x 24 x 60 x 60 are treated as absolute Unix timestamps, while smaller values are treated as seconds relative to the current time.
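For concreteness: the cutoff is 30 x 24 x 60 x 60 = 2,592,000 seconds (30 days). Your 172800 is well below that, so it is interpreted as "two days from now" rather than as an absolute timestamp.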
The Python SDK reference explicitly mentions the unit is seconds.
I'm working on a horse racing application and need to store elapsed times from races in a table. I will be importing data from a comma-delimited file that provides the final time in one format and the interior elapsed times in another. The following is an example:
Final Time: 109.39 (1 minute, 9 seconds and 39/100th seconds)
Quarter Time: 2260 (22 seconds and 60/100th seconds)
Half Time: 4524 (45 seconds and 24/100th seconds)
Three Quarters: 5993 (59 seconds and 93/100th seconds)
I'll want to have the flexibility to easily do things like feet-per-second calculations and to convert elapsed times to splits. I'll also want to be able to easily display the times (elapsed or splits) in fifths of a second or in hundredths.
Times in fifths: :223 :451 :564 1:091 (note the last digits are superscripts)
Times in hundredths: 22.60 :45.24 :56.93 1:09.39
Thanks in advance for your input.
Generally timespans are stored either as (1) seconds elapsed or (2) start/end datetimes. Seconds elapsed can be an integer, or a float/double if you require it. You could be creative/crazy and store all times as milliseconds, in which case you'd only need an integer.
If you are using PostgreSQL, you can use the interval datatype. Otherwise, any integer (int4, int8) or numeric type your database supports is OK. Of course, store all values in a single unit of measure: seconds, minutes, or milliseconds.
It all depends on how you intend to use it, but number of elapsed seconds (perhaps as a float if necessary) is certainly a favorite.
I think the 109.39 format, meaning 1 min 9.39 sec, is pretty silly. Unambiguous, sure, and maybe historical tradition, but it's miserable to do computations with. (Not impossible, but fixing it during import sounds easy.)
I'd store time in a decimal format of some sort -- either an integer representing hundredths of a second, which is how all your other times are displayed, or a database-specific decimal-aware type.
Standard floating point representations might eventually lead you to wonder why a horse that ran two laps in 20.1 seconds each took 40.200035 seconds to run both laps combined.
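To make the hundredths-of-a-second suggestion concrete, here is a small Python sketch (my own function names; the input formats are inferred from the question's examples) that parses both formats into integer hundredths and formats them back out in either style:

def final_to_hundredths(s):
    # "109.39" means 1 minute 9.39 seconds -> 6939 hundredths
    minutes, seconds = divmod(float(s), 100)
    return round(minutes * 60 * 100 + seconds * 100)

def fraction_to_hundredths(s):
    # "2260" means 22.60 seconds -> 2260 hundredths
    return int(s)

def as_hundredths(h):
    # 2260 -> ":22.60", 6939 -> "1:09.39"
    minutes, rest = divmod(h, 60 * 100)
    sec, frac = divmod(rest, 100)
    return f"{minutes}:{sec:02d}.{frac:02d}" if minutes else f":{sec:02d}.{frac:02d}"

def as_fifths(h):
    # 2260 -> ":223"; the trailing digit is the number of fifths, rounded down
    minutes, rest = divmod(h, 60 * 100)
    sec, frac = divmod(rest, 100)
    return f"{minutes}:{sec:02d}{frac // 20}" if minutes else f":{sec:02d}{frac // 20}"

With the values stored as plain integer hundredths, feet-per-second and split calculations are ordinary arithmetic, and the floating-point surprise mentioned above largely goes away.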