lowest low after 52 week high afl - amibroker

Is there a way, using an AmiBroker AFL exploration, to find the lowest low made by a stock after its most recent 52-week high?
Since this exploration will run on a watchlist, and each stock will have made its 52-week high on a different date, how do I code it to find the lows after each stock's own high date?

As an example, you can start with this.
52 Week High AFL
Then use something like:
newLow = LLV(L, BarsSince(HI));

A bit late to the party, but you want to use
LowestSince(new_52_week_high, L, 1);
where
new_52_week_high = H > Ref(HHV(H, 250), -1);
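Since AmiBroker evaluates an exploration separately for each symbol, the same formula handles every stock's own high date automatically. For readers who want to check the logic outside AmiBroker, here is a rough Python equivalent of `LowestSince(new_52_week_high, L, 1)`; the function name and the bar-by-bar loop are illustrative assumptions, not part of the original answer (AFL computes this on arrays natively):

```python
def lowest_low_since_52w_high(highs, lows, lookback=250):
    """For each bar, the lowest low since the most recent 52-week high.

    A "new 52-week high" is a bar whose high exceeds the highest high of
    the previous `lookback` bars (mirroring H > Ref(HHV(H, 250), -1)).
    Returns None until the first such high occurs.
    """
    out = []
    lowest = None
    for i in range(len(highs)):
        prior = highs[max(0, i - lookback):i]
        is_new_high = bool(prior) and highs[i] > max(prior)
        if is_new_high:
            lowest = lows[i]                 # reset at each new 52-week high
        elif lowest is not None:
            lowest = min(lowest, lows[i])    # track the lowest low since then
        out.append(lowest)
    return out
```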

Really basic stuff! Use for loops or while loops to design a plan that chooses the minimum number of gifts

There are four gifts, which cost 100, 30, 5, and 2. You have $89 to spend on them. You want to spend as much of your money as possible while buying the minimum number of gifts (I know it doesn't sound reasonable at first). For example, in this case, starting from the most expensive gift, you cannot afford it, so you can only choose two $30 gifts. Now you have $29 left, and you can buy five $5 gifts and then two $2 gifts, totalling 9 gifts with $0 left. That is the plan, and the desired output is 9. I need code that can generate this kind of plan for whatever is input at first. If I change the numbers to 40, 30, 8, 3 and $100, the best plan should still be output.
I got a hint that we can list the numbers from big to small, for example list1 = [100, 30, 5, 2, 89] (the cost of each gift first, then the total money you have), and then buy the maximum number of the most expensive gift and see if there's any money left for the other gifts.
It is a beginner question, so don't make it look too hard. Just use for loops and while loops (as if you had just started to learn).
There's no need to generate random numbers; you can use 100, 30, 5, 2 or other numbers you like.
Thanks so much, I really need your help!
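A beginner-friendly sketch of the hinted approach, in Python: sort the prices from big to small, then for each price use a while loop to buy as many of that gift as the remaining money allows. The function and variable names are mine; note that this greedy plan matches the hint in the question, but it is not guaranteed to be optimal for every possible set of prices.

```python
def plan_gifts(prices, budget):
    """Greedy plan: buy the most expensive affordable gift as often as possible."""
    total_gifts = 0
    money_left = budget
    for price in sorted(prices, reverse=True):   # big to small, as the hint says
        while money_left >= price:               # buy as many of this gift as we can
            money_left -= price
            total_gifts += 1
    return total_gifts, money_left

print(plan_gifts([100, 30, 5, 2], 89))   # 2 x $30, 5 x $5, 2 x $2 -> (9, 0)
```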

Recovery Time Calculation Supply vs Demand

I have a scenario where I am comparing the output of an item to its required output which is done weekly. I calculate the percentage of actual output against required output which can be seen in the Requirement Met % row below.
Where I'm struggling is with the recovery time metric. I'm trying to calculate the recovery time where Actual Output is short of the required output, and how long it takes to back fill a shortage.
So for week 1 in the image above, only 90% of the required output was fulfilled, leaving a shortage of 10%. I'd then look at week 2 to fill week 1's shortage; however, week 2 is also 10% short of its required output. So I'd then need to look at week 3 to fulfil week 1's shortage. This is possible, as week 3 produced more than its required output (113%), enough to fulfil week 1's shortage. This means it took an extra 2 weeks for week 1's required output to be met. The same applies to week 2: it would look at each following week until its shortage can be met.
After week 3's required output and week 1's shortage are deducted from week 3's actual output, that leaves 3% which is not enough to fulfill week 2's required output. So I'd look at week 4 to see if it has enough leftover output to fulfill week 2's shortage...which it does. And so on and so on.
This would show the recovery time for each week.
I am working in Alteryx but have not attempted this yet, as I haven't worked out how to calculate the figure. If not Alteryx, I could do it in Power BI, but it's really the method for calculating this metric that I'm after.
Thanks,
Tom
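One way to compute the metric described above is to keep a queue of outstanding shortages and pay them down, oldest first, from each later week's surplus. A Python sketch under that assumption (the function name and the example percentages are illustrative; the same logic could be rebuilt in Alteryx or Power BI):

```python
def recovery_weeks(actual, required):
    """For each week, the number of extra weeks until its shortage is back-filled.

    Shortages are filled oldest-first from later weeks' surplus output.
    Weeks with no shortage, or whose shortage is never recovered, stay None.
    """
    recovery = [None] * len(actual)
    outstanding = []  # [week_index, remaining_shortage], oldest first
    for week, (a, r) in enumerate(zip(actual, required)):
        surplus = a - r                      # this week's own requirement comes first
        if surplus < 0:
            outstanding.append([week, -surplus])
            continue
        while surplus > 0 and outstanding:   # pay down the oldest shortages
            idx, owed = outstanding[0]
            paid = min(surplus, owed)
            surplus -= paid
            owed -= paid
            if owed == 0:
                recovery[idx] = week - idx   # extra weeks needed to recover
                outstanding.pop(0)
            else:
                outstanding[0][1] = owed
    return recovery
```

With the figures from the description (weeks at 90%, 90%, 113%, 110% of a requirement of 100), week 1 and week 2 each take 2 extra weeks to recover.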

Efficiently solving coding puzzle with long array?

I have been having some difficulty thinking of an efficient way of solving this problem and making it work for long arrays. I am pretty sure that I don't know a programming technique needed to solve it. I would be thankful if you could help me!
The problem: Jack is a little businessman. He found a way to earn money by buying electricity on days when it's cheap and selling it when it's much more expensive. He stores the electricity in a battery he made himself. You are given N, the number of days Jack knows the cost of electricity for, and X, the amount of money Jack has available to invest; on the next line you are given the value (buy/sell price) of electricity on each of the N days. Your job is to determine when Jack should buy and when he should sell electricity in order to earn as much money as possible, and simply print the largest possible sum of money he can end up with. The value of the electricity is always an integer, but depending on the amount of money Jack has, the amounts of electricity and money he holds may be floating-point numbers. I have a few ideas for how to solve the problem, but they're all very inefficient for long arrays.
Example:
Input:
4 10
4 10 5 20
Output:100, because he buys electricity on the 1st day, sells it on the 2nd, buys again on the 3rd, and sells on the 4th day.
Example num. 2:
Input:
3 21
10 8 3
Output:21, because it's better if he doesn't buy/sell any electricity.
Example num. 3:
Input:
3 10
8 10 14
Output:17.5, because he buys electricity on the 1st day, but he sells it on the 3rd day.
On any day, Jack has either cash or electricity.
On any day other than the last, he buys electricity if he has cash, and the price on the next day is higher.
On any day, he sells electricity if he has electricity, and it is either the last day, or the price on the next day is lower.
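The greedy rule above needs only a single O(N) pass, so it stays efficient for long arrays. A sketch in Python (the function name `max_money` is my own):

```python
def max_money(prices, cash):
    """Greedy single pass: hold electricity exactly while the price is rising."""
    electricity = 0.0
    holding = False
    for i, price in enumerate(prices):
        last_day = (i + 1 == len(prices))
        if not holding and not last_day and prices[i + 1] > price:
            electricity = cash / price       # buy with all available cash
            cash = 0.0
            holding = True
        elif holding and (last_day or prices[i + 1] < price):
            cash = electricity * price       # sell everything before the drop / at the end
            electricity = 0.0
            holding = False
    return cash
```

For the examples above: `max_money([4, 10, 5, 20], 10)` gives 100, `max_money([10, 8, 3], 21)` gives 21 (never buys), and `max_money([8, 10, 14], 10)` gives 17.5.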

Rolling Arrays as a possible solution to a rolling window in SAS

I am trying to calculate the dosage of a particular drug for a population, to see if any member of the population is over a certain threshold for any 90 consecutive days. To do this, I am thinking I will need to build an array that looks at the strength of this drug over 90 days from an index date; if all 90 values are '1', the member gets a 'pass'. I would then somehow put this into a DO loop to check every potential 90-day window in a year (i=1 ... i=275) to see whether the member has met the criteria at any point during the year. Thoughts?
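Rather than re-summing each of the 275 windows from scratch, a sliding-window sum keeps a running count of flagged days and updates it in O(1) per day. A Python sketch of the idea (a SAS version would use a DO loop over a retained array; the names here are illustrative):

```python
def over_threshold_window(daily_flags, window=90):
    """True if any `window` consecutive days are all flagged (flag == 1)."""
    if len(daily_flags) < window:
        return False
    running = sum(daily_flags[:window])      # flagged days in the first window
    if running == window:
        return True
    for i in range(window, len(daily_flags)):
        # slide the window one day: add the new day, drop the oldest
        running += daily_flags[i] - daily_flags[i - window]
        if running == window:
            return True
    return False
```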

Is there an easy way to get the percentage of successful reads of last x minutes?

I have a setup with a BeagleBone Black which communicates over I²C with its slaves every second and reads data from them. Sometimes the I²C readout fails, though, and I want to gather statistics about these failures.
I would like to implement an algorithm which displays the percentage of successful communications over the last 5 minutes (up to 24 hours) and updates that value constantly. If I implemented that 'normally', with an array storing success/no success for every second, that would mean a lot of wasted RAM/CPU for a minor feature (especially for statistics over the last 24 hours).
Does someone know a good way to do that, or can anyone point me in the right direction?
Why don't you just implement a low-pass filter? For every successful transfer, you push in a 1; for every failed one, a 0; the result is a number between 0 and 1. Assuming that your transfers happen periodically, this works well -- you just have to adjust the cutoff frequency of that filter to your desired "averaging duration".
However, I can't follow your RAM argument: assuming you store one byte representing success or failure per transfer, which you say happens every second, you end up with 86400 B per day -- about 84 KB/day, which is really negligible.
EDIT Cutoff frequency is something from signal theory and describes the highest or lowest frequency that passes a low or high pass filter.
Implementing a low-pass filter is trivial; something like (pseudocode):
new_val = 1 // init with no failed transfers
alpha = 0.001
while(true):
    old_val = new_val
    success = do_transfer_and_return_1_on_success_or_0_on_failure()
    new_val = alpha * success + (1 - alpha) * old_val
That's a single-tap IIR (infinite impulse response) filter; single tap because there's only one alpha and thus, only one number that is stored as state.
EDIT2: the value of alpha defines the behaviour of this filter.
EDIT3: you can use a filter design tool to give you the right alpha; just set your low pass filter's cutoff frequency to something like 0.5/integrationLengthInSamples, select an order of 0 for the IIR and use an elliptic design method (most tools default to butterworth, but 0 order butterworths don't do a thing).
I'd use scipy and convert the resulting (b,a) tuple (a will be 1, here) to the correct form for this feedback form.
UPDATE In light of the OP's comment 'determine a trend of which devices are failing', I would recommend the geometric average that Marcus Müller put forward.
ACCURATE METHOD
The method below is aimed at obtaining 'well defined' statistics for performance over time that are also useful for 'after the fact' analysis.
Notice that the geometric average 'looks back' over recent messages rather than over a fixed time period.
Maintain a rolling array of 24*60/5 = 288 'prior success rates' (SR[i] with i=-1, -2,...,-288) each representing a 5 minute interval in the preceding 24 hours.
That will consume about 2.3 KB if the elements are 64-bit doubles.
To 'effect' constant updating use an Estimated 'Current' Success Rate as follows:
ECSR = (t*(S/M) + (300 - t)*SR[-1]) / 300
where S and M are the counts of successes and messages in the current (partially complete) period, and SR[-1] is the previous (now complete) bucket.
t is the number of seconds elapsed in the current bucket.
NB: At start-up, before the first bucket completes, just use S/M for the current period.
In essence the approximation assumes the error rate was steady over the preceding 5 - 10 minutes.
To 'effect' a 24-hour look-back you can either shuffle the data down (by copy or memcpy()) at the end of each 5-minute interval, or implement a circular array by keeping track of the current bucket index.
NB: For many management/diagnostic purposes intervals of 15 minutes are often entirely adequate. You might want to make the 'grain' configurable.
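A sketch of the bucket scheme in Python (the class and method names are mine, and the bucket length and history depth are the configurable 'grain' mentioned above):

```python
from collections import deque

class SuccessStats:
    """Rolling success-rate statistics in fixed-length buckets.

    Defaults to 5-minute buckets over 24 hours (288 buckets), as described
    above; only completed bucket rates plus a few counters are stored.
    """
    def __init__(self, bucket_seconds=300, history=288):
        self.bucket_seconds = bucket_seconds
        self.rates = deque(maxlen=history)  # completed buckets, oldest first
        self.successes = 0
        self.messages = 0
        self.elapsed = 0                    # seconds into the current bucket

    def record(self, ok):
        """Call once per transfer (i.e. once per second)."""
        self.messages += 1
        self.successes += int(ok)
        self.elapsed += 1
        if self.elapsed == self.bucket_seconds:
            self.rates.append(self.successes / self.messages)
            self.successes = self.messages = self.elapsed = 0

    def current_rate(self):
        """ECSR: blend the partial bucket with the last completed one."""
        t, T = self.elapsed, self.bucket_seconds
        partial = self.successes / self.messages if self.messages else 1.0
        if not self.rates:                  # start-up: no completed bucket yet
            return partial
        return (t * partial + (T - t) * self.rates[-1]) / T
```

A deque with `maxlen` gives the circular-array behaviour for free: appending the 289th bucket silently drops the oldest one.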