Weight assignment to define an objective function

I have a set of jobs with execution times (C1, C2, ..., Cn) and deadlines (D1, D2, ..., Dn). Each job completes its execution in some time, i.e., its
response time (R1, R2, ..., Rn). However, there is a possibility that not every job will complete its execution before its deadline. So I define a variable called slack for each job, i.e., (S1, S2, ..., Sn). Slack is simply the difference between the deadline and the response time of a job, i.e.,
S1 = D1 - R1
S2 = D2 - R2, ... and so on
I have a set of slacks [S1, S2, S3, ..., Sn]. These slacks can be positive or negative depending on the deadline and response time of each job, i.e., on D and R.
The problem is that I need to define weights (W) for each job (or slack) such that a job with negative slack (i.e., R > D, a job that misses its deadline) gets more weight (W) than a job with positive slack, and based on these weights and slacks I need to define an objective function that can be used to maximize the slack.
The problem doesn't seem to be a difficult one; however, I couldn't find a solution. Any help in this regard is much appreciated.
Thanks

This can often be done easily with variable splitting:
splus(i) - smin(i) = d(i) - r(i)
splus(i) ≥ 0, smin(i) ≥ 0
If we then minimize an objective of the form
sum(i, w1 * splus(i) + w2 * smin(i))
this works fine: we don't need to add the complementarity condition splus(i) * smin(i) = 0.
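As a minimal sketch of the variable-splitting trick (my own illustration with made-up deadlines and response times, and scipy assumed as the solver; in a real model r(i) would itself come from scheduling decisions, so here the LP merely performs the split):

import numpy as np
from scipy.optimize import linprog

d = np.array([10.0, 8.0, 12.0])    # deadlines D_i (made-up values)
r = np.array([7.0, 9.0, 15.0])     # response times R_i (made-up values)
n = len(d)
w1, w2 = 1.0, 100.0                # w2 >> w1 penalizes negative slack (missed deadlines) more

# Decision vector x = [splus_1..splus_n, smin_1..smin_n], all >= 0
c = np.concatenate([w1 * np.ones(n), w2 * np.ones(n)])   # minimize sum(w1*splus + w2*smin)
A_eq = np.hstack([np.eye(n), -np.eye(n)])                # splus_i - smin_i = d_i - r_i
b_eq = d - r

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * n))
splus, smin = res.x[:n], res.x[n:]
print("positive slack:", splus)    # -> [3, 0, 0]
print("missed by:     ", smin)     # -> [0, 1, 3]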

Related

Generating a unique id with 6 characters - handling the case when too many ids are already used

In my program you can book an item. This item has an id made of 6 characters drawn from 32 possible characters.
So there are 32^6 possible ids. Every id must be unique.
func tryToAddItem() {
    if !db.contains(generateId()) {
        addItem()
    } else {
        tryToAddItem()
    }
}
For example, suppose 90% of my ids are used. Then the probability that I call tryToAddItem 5 times is 0.9^5 * 100 = 59%, isn't it?
That is quite high, and it means 5 database queries over a lot of data.
When the probability is that high, I want to introduce a prefix „A-xxxxxx“.
What is a good condition for that? At what point will I need a prefix?
In my example 90% of the ids were used. What about the rest? Do I throw them away?
What about database performance when I call tryToAddItem 5 times? I imagine that this is not best practice.
For example, suppose 90% of my ids are used. Then the probability that I call tryToAddItem 5 times is 0.9^5 * 100 = 59%, isn't it?
Not quite. Let's represent the number of calls you make with the random variable X, and let's call the probability of an id collision p. You want the probability that you make the call at most five times, or in general at most k times:
P(X≤k) = P(X=1) + P(X=2) + ... + P(X=k)
= (1-p) + (1-p)*p + (1-p)*p^2 +... + (1-p)*p^(k-1)
= (1-p)*(1 + p + p^2 + .. + p^(k-1))
If we expand this out all but two terms cancel and we get:
= 1 - p^k
Which we want to be greater than some probability, x:
1 - p^k > x
Or with p in terms of k and x:
p < (1-x)^(1/k)
where you can adjust x and k for your specific needs.
If you want less than a 50% probability of needing more than 5 calls, then no more than (1-0.5)^(1/5) ≈ 87% of your ids should be taken.
First of all, make sure there is an index on the id column you are looking up. Then I would recommend thinking in terms of setting a very low probability for a very bad event. For example, maybe needing more than 20 calls slows down the database for too long, so we'd like the probability of that to be below 0.1%. Using the formula above we find that no more than about 70% of the ids should be taken.
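A quick way to evaluate that threshold for different k and x (a small helper of my own, just restating the formula):

def max_fill_ratio(k, x):
    """Largest fraction p of used ids such that P(at most k attempts) > x."""
    return (1 - x) ** (1 / k)

print(max_fill_ratio(5, 0.5))      # ~0.87: stay below 87% full for <50% chance of >5 calls
print(max_fill_ratio(20, 0.999))   # ~0.71: stay below ~71% full for <0.1% chance of >20 calls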
But you should also consider alternative solutions. Is remapping all ids to a larger space one time only a possibility?
Or if adding ids with prefixes is not a big deal then you could generate longer ids with prefixes for all new items going forward and not have to worry about collisions.
Thanks for the response. I searched for alternatives and want to show three possibilities.
First possibility: create an UpcomingItemIdTable with 200 (more or less) valid itemIds. A background task can replenish them every minute (or at whatever interval you need). That way the tryToAddItem action will always get a valid itemId.
Second possibility
Is remapping all ids to a larger space one time only a possibility?
In my case yes. I think for other problems the answer will be: it depends.
Third possibility: try to generate an itemId and, when there is a collision, try again.
Possible collision handling: do some tests beforehand. Measure the time needed to generate itemIds when there are already 1,000, 10,000, 100,000, 1,000,000, etc. entries in the table. When the tryToAddItem method needs more than 100 ms (or whatever you prefer), increase the id length from 6 to 7, 8, 9 characters.
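If you want to experiment with the retry behaviour before touching the real database, a self-contained toy simulation might look like the sketch below (my own code; a Python set stands in for the table, and the id length is shortened to 4 so the full id space fits in memory):

import itertools, random, string, time

ALPHABET = string.ascii_uppercase + string.digits[:6]   # 32 characters (an assumption)
ID_LEN = 4                                              # shortened so the toy fits in RAM

def generate_id():
    return "".join(random.choices(ALPHABET, k=ID_LEN))

all_ids = ["".join(t) for t in itertools.product(ALPHABET, repeat=ID_LEN)]
for used_fraction in (0.5, 0.9, 0.99):
    db = set(all_ids[: int(len(all_ids) * used_fraction)])   # pretend this fraction is taken
    start = time.perf_counter()
    attempts, new_id = 1, generate_id()
    while new_id in db:                                       # retry on collision
        attempts, new_id = attempts + 1, generate_id()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{used_fraction:.0%} full: {attempts} attempt(s), {elapsed_ms:.3f} ms")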
Some thoughts
every request must be atomic
create an index on itemId
Disadvantages of long UUIDs in an API: see https://zalando.github.io/restful-api-guidelines/#144
They are less usable, because they
- cannot be memorized and easily communicated by humans
- are harder to use in debugging and logging analysis
- are less convenient for consumer-facing usage
- are quite long: a readable representation requires 36 characters and comes with higher memory and bandwidth consumption
- are not ordered along their creation history and give no indication of used id volume
- may be in conflict with additional backward compatibility support of legacy ids
[...]
TL;DR: for my case every possibility works. As so often, it depends on the problem. Thanks for the input.

Reducing the range of a hash function while maintaining an even distribution

I'm writing a cron-like job-dispatcher that runs jobs every minute (and other jobs every 5 minutes, etc.). However, instead of immediately dispatching all the jobs that run on a particular period at the top-of-the-minute, I want to spawn them evenly over their periods.
For example, if I have N jobs that run every P minutes, rather than spawning them all at P:00, I want to spread the spawns evenly over the P*60 seconds, i.e., ceil(N/(P*60)) jobs per second. Hence, the spawn time for each job would be "skewed" somewhat later.
However, for each job J, I want J to be spawned at the same skew every time it's dispatched so that the time between spawns for J is constant (and matches its period).
Each job has various information associated with it, including several strings that vary for each job. My original thought was to calculate a hash-code, H, for one or more of the strings and mod it by P*60 to calculate a constant skew, S, for each job. As long as the strings associated with the job remain the same, the calculated skew would remain constant.
However, I assume that S = H % (P*60) suffers from problems similar to using rand() (an uneven distribution biased towards lower numbers). However, I don't think the solutions presented there (calling rand() multiple times) would apply to my case, because the hash function for a given job would always return the same hash.
So how can I get what I want? (I'm writing in C.)
Examples:
Suppose I have N every-minute jobs (a cron schedule of * * * * *). For N < 60 (say, 2), job-1 might be skewed to start at :23 (23 seconds past the minute) and job-2 might be skewed to start at :37. With so few jobs, it may not seem evenly distributed. However, as N approached 60, the "gaps" would fill in (assuming a perfect skew function) so that one job would be spawned every second. If N passed 60, some jobs spawned at some seconds would "double up." Similarly, as N approached 120, the "gaps" would again fill in so that two jobs would be spawned every second. And so on.
Suppose I have N every-five-minute jobs (a cron schedule of */5 * * * *). In "normal" cron, that means "every five minutes, on the zeroth second, on the fives." I instead want it to mean "every five minutes, but not necessarily (and most likely not) on the zeroth second of some minute; the only guarantee is that the interval between spawns will be five minutes." So, for example, a particular job might be spawned at 00:07:24, 00:12:24, 00:17:24, etc. As N approached 300, one job would be spawned per second.
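For concreteness, here is a sketch of the skew scheme described above (my own illustration, in Python rather than C; the choice of SHA-256 as the hash is an assumption):

import hashlib

def skew_seconds(job_key, period_minutes):
    """Deterministic per-job offset in [0, period_minutes * 60)."""
    window = period_minutes * 60
    h = int.from_bytes(hashlib.sha256(job_key.encode()).digest()[:8], "big")
    # When the hash range (2**64 here) is vastly larger than `window` and the
    # hash output is well distributed, the bias introduced by the modulo is on
    # the order of window / 2**64, i.e. negligible.
    return h % window

print(skew_seconds("backup-job /etc/backup.conf", 5))   # same skew on every dispatch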

Is there an easy way to get the percentage of successful reads of the last x minutes?

I have a setup with a BeagleBone Black which communicates over I²C with its slaves every second and reads data from them. Sometimes the I²C readout fails, though, and I want to gather statistics about these failures.
I would like to implement an algorithm that displays the percentage of successful communications over the last 5 minutes (up to 24 hours) and updates that value constantly. If I implemented that 'normally' with an array storing success/no success for every second, that would mean a lot of wasted RAM/CPU for a minor feature (especially if I wanted to see the statistics of the last 24 hours).
Does someone know a good way to do that, or can anyone point me in the right direction?
Why don't you just implement a low-pass filter? For every successful transfer you push in a 1, for every failed one a 0; the result is a number between 0 and 1. Assuming that your transfers happen periodically, this works well -- and you just have to adjust the cutoff frequency of that filter to your desired "averaging duration".
However, I can't follow your RAM argument: assuming you store one byte representing success or failure per transfer, which you say happens every second, you end up with 86400 B per day -- roughly 85 KB/day is really negligible.
EDIT Cutoff frequency is something from signal theory and describes the highest or lowest frequency that passes a low or high pass filter.
Implementing a low-pass filter is trivial; something like this (Python-style pseudocode):
new_val = 1.0   # init with no failed transfers
alpha = 0.001
while True:
    old_val = new_val
    success = do_transfer_and_return_1_on_success_or_0_on_failure()
    new_val = alpha * success + (1 - alpha) * old_val
That's a single-tap IIR (infinite impulse response) filter; single tap because there's only one alpha and thus, only one number that is stored as state.
EDIT2: the value of alpha defines the behaviour of this filter.
EDIT3: you can use a filter design tool to give you the right alpha; just set your low pass filter's cutoff frequency to something like 0.5/integrationLengthInSamples, select an order of 0 for the IIR and use an elliptic design method (most tools default to butterworth, but 0 order butterworths don't do a thing).
I'd use scipy and convert the resulting (b,a) tuple (a will be 1, here) to the correct form for this feedback form.
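If you'd rather pick alpha with a back-of-the-envelope rule than a design tool, this rough relation (my own note, not from the answer) between alpha and the averaging window may help:

# The filter weights a sample n steps in the past by alpha * (1 - alpha)**n,
# which decays to about 1/e of its initial weight after n ≈ 1/alpha samples.
# With one transfer per second, a ~5-minute averaging window therefore
# corresponds to roughly:
alpha = 1 / 300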
UPDATE In light of the comment by the OP ('determine a trend of which devices are failing') I would recommend the geometric average that Marcus Müller put forward.
ACCURATE METHOD
The method below is aimed at obtaining 'well defined' statistics for performance over time that are also useful for 'after the fact' analysis.
Notice that the geometric average 'looks back' over recent messages rather than over a fixed time period.
Maintain a rolling array of 24*60/5 = 288 'prior success rates' (SR[i] with i=-1, -2,...,-288) each representing a 5 minute interval in the preceding 24 hours.
That will consume about 2.3 KB if the elements are 64-bit doubles.
To 'effect' constant updating, use an Estimated 'Current' Success Rate as follows:
ECSR = (t * (S/M) + (300 - t) * SR[-1]) / 300
where S and M are the counts of successes and messages in the current (partially complete) period, and SR[-1] is the previous (now complete) bucket.
t is the number of seconds elapsed in the current bucket.
NB: When you start up you need to use 300*S/M/t.
In essence the approximation assumes the error rate was steady over the preceding 5 to 10 minutes.
To 'effect' a 24-hour look-back you can either 'shuffle' the data down (by copy or memcpy()) at the end of each 5-minute interval, or implement a circular array by keeping track of the current bucket index.
NB: For many management/diagnostic purposes, intervals of 15 minutes are often entirely adequate. You might want to make the 'grain' configurable.
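A minimal sketch (my own code, with illustrative names) of the bucketed scheme just described; the startup case simply falls back to S/M:

BUCKET_SECONDS = 300
NUM_BUCKETS = 24 * 60 // 5                # 288 five-minute buckets covering 24 hours

success_rates = [None] * NUM_BUCKETS      # circular array of prior success rates SR[i]
bucket_index = 0                          # bucket currently being filled
successes = 0                             # S: successes in the current partial bucket
messages = 0                              # M: messages in the current partial bucket

def record(result_ok, t):
    """Record one transfer; return the estimated current success rate.
    t is the number of seconds elapsed in the current bucket."""
    global successes, messages
    messages += 1
    successes += 1 if result_ok else 0
    prev = success_rates[(bucket_index - 1) % NUM_BUCKETS]
    if prev is None:                      # startup: no complete bucket yet
        return successes / messages
    return (t * successes / messages + (BUCKET_SECONDS - t) * prev) / BUCKET_SECONDS

def close_bucket():
    """Call at the end of each 5-minute interval."""
    global bucket_index, successes, messages
    success_rates[bucket_index] = successes / messages if messages else None
    bucket_index = (bucket_index + 1) % NUM_BUCKETS
    successes = messages = 0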

What is the difference between Q-learning and SARSA?

Although I know that SARSA is on-policy while Q-learning is off-policy, when looking at their formulas it's hard (for me) to see any difference between these two algorithms.
According to the book Reinforcement Learning: An Introduction (by Sutton and Barto), in the SARSA algorithm, given a policy, the corresponding action-value function Q (in state s and action a, at timestep t), i.e. Q(st, at), can be updated as follows
Q(st, at) = Q(st, at) + α*(rt + γ*Q(st+1, at+1) - Q(st, at))
On the other hand, the update step for the Q-learning algorithm is the following
Q(st, at) = Q(st, at) + α*(rt + γ*maxa Q(st+1, a) - Q(st, at))
which can also be written as
Q(st, at) = (1 - α) * Q(st, at) + α * (rt + γ*maxa Q(st+1, a))
where γ (gamma) is the discount factor and rt is the reward received from the environment at timestep t.
Is the difference between these two algorithms the fact that SARSA only looks up the next policy value while Q-learning looks up the next maximum policy value?
TLDR (and my own answer)
Thanks to everyone who has answered this question since I first asked it. I've made a GitHub repo playing with Q-learning and empirically understood what the difference is. It all amounts to how you select your next best action, which from an algorithmic standpoint can be a mean, max, or best action depending on how you choose to implement it.
The other main difference is when this selection happens (e.g., online vs. offline) and how/why that affects learning. If you are reading this in 2019 and are more of a hands-on person, playing with an RL toy problem is probably the best way to understand the differences.
One last important note is that both Sutton & Barto and Wikipedia often have mixed, confusing or wrong formulaic representations with regards to the next state's best/max action and reward: r(t+1) is in fact r(t).
When I was learning this part, I found it very confusing too, so I put together the two pseudo-codes from R.Sutton and A.G.Barto hoping to make the difference clearer.
Blue boxes highlight the part where the two algorithms actually differ. Numbers highlight the more detailed difference to be explained later.
TL;DR:
| | SARSA | Q-learning |
|:-----------:|:-----:|:----------:|
| Choosing A' | π | π |
| Updating Q | π | μ |
where π is an ε-greedy policy (i.e. ε > 0, with exploration), and μ is a greedy policy (i.e. ε == 0, NO exploration).
Q-learning uses different policies for choosing the next action A' and for updating Q. In other words, it tries to evaluate π while following another policy μ, so it's an off-policy algorithm.
In contrast, SARSA uses π all the time, hence it is an on-policy algorithm.
More detailed explanation:
The most important difference between the two is how Q is updated after each action. SARSA uses the Q' following an ε-greedy policy exactly, as A' is drawn from it. In contrast, Q-learning uses the maximum Q' over all possible actions for the next step. This makes it look like following a greedy policy with ε = 0, i.e. NO exploration in this part.
However, when actually taking an action, Q-learning still uses the action taken from an ε-greedy policy. This is why "Choose A ..." is inside the repeat loop.
Following the loop logic in Q-learning, A' is still from the ε-greedy policy.
Yes, this is the only difference. On-policy SARSA learns action values relative to the policy it follows, while off-policy Q-learning does it relative to the greedy policy. Under some common conditions, they both converge to the real value function, but at different rates. Q-learning tends to converge a little slower, but has the capability to continue learning while changing policies. Also, Q-learning is not guaranteed to converge when combined with linear approximation.
In practical terms, under the ε-greedy policy, Q-Learning computes the difference between Q(s,a) and the maximum action value, while SARSA computes the difference between Q(s,a) and the weighted sum of the average action value and the maximum:
Q-Learning: Q(st+1,at+1) = maxaQ(st+1,a)
SARSA: Q(st+1,at+1) = ε·meanaQ(st+1,a) + (1-ε)·maxaQ(st+1,a)
What is the difference mathematically?
As is already described in most other answers, the difference between the two updates mathematically is indeed that, when updating the Q-value for a state-action pair (St, At):
Sarsa uses the behaviour policy (meaning, the policy used by the agent to generate experience in the environment, which is typically epsilon-greedy) to select an additional action At+1, and then uses Q(St+1, At+1) (discounted by gamma) as expected future returns in the computation of the update target.
Q-learning does not use the behaviour policy to select an additional action At+1. Instead, it estimates the expected future returns in the update rule as maxA Q(St+1, A). The max operator used here can be viewed as "following" the completely greedy policy. The agent is not actually following the greedy policy though; it only says, in the update rule, "suppose that I would start following the greedy policy from now on, what would my expected future returns be then?".
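To make the two update targets concrete, here is a minimal tabular sketch (my own illustration, not code from the book or the answers; Q is assumed to be a dict holding a value for every (state, action) pair, e.g. a defaultdict(float)):

import random

def epsilon_greedy(Q, state, actions, eps):
    # Behaviour policy used by both algorithms to pick the action actually executed.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_update(Q, s, a, r, s_next, actions, alpha, gamma, eps):
    a_next = epsilon_greedy(Q, s_next, actions, eps)     # A' sampled from the behaviour policy
    target = r + gamma * Q[(s_next, a_next)]             # on-policy target
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return a_next                                        # Sarsa also executes this A'

def q_learning_update(Q, s, a, r, s_next, actions, alpha, gamma):
    target = r + gamma * max(Q[(s_next, b)] for b in actions)   # greedy (max) target
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    # The action actually executed in s_next is still chosen epsilon-greedily elsewhere.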
What does this mean intuitively?
As mentioned in other answers, the difference described above means, using technical terminology, that Sarsa is an on-policy learning algorithm, and Q-learning is an off-policy learning algorithm.
In the limit (given an infinite amount of time to generate experience and learn), and under some additional assumptions, this means that Sarsa and Q-learning converge to different solutions / "optimal" policies:
Sarsa will converge to a solution that is optimal under the assumption that we keep following the same policy that was used to generate the experience. This will often be a policy with some element of (rather "stupid") randomness, like epsilon-greedy, because otherwise we are unable to guarantee that we'll converge to anything at all.
Q-Learning will converge to a solution that is optimal under the assumption that, after generating experience and training, we switch over to the greedy policy.
When to use which algorithm?
An algorithm like Sarsa is typically preferable in situations where we care about the agent's performance during the process of learning / generating experience. Consider, for example, that the agent is an expensive robot that will break if it falls down a cliff. We'd rather not have it fall down too often during the learning process, because it is expensive. Therefore, we care about its performance during the learning process. However, we also know that we need it to act randomly sometimes (e.g. epsilon-greedy). This means that it is highly dangerous for the robot to be walking alongside the cliff, because it may decide to act randomly (with probability epsilon) and fall down. So, we'd prefer it to quickly learn that it's dangerous to be close to the cliff; even if a greedy policy would be able to walk right alongside it without falling, we know that we're following an epsilon-greedy policy with randomness, and we care about optimizing our performance given that we know that we'll be stupid sometimes. This is a situation where Sarsa would be preferable.
An algorithm like Q-learning would be preferable in situations where we do not care about the agent's performance during the training process, but we just want it to learn an optimal greedy policy that we'll switch to eventually. Consider, for example, that we play a few practice games (where we don't mind losing due to randomness sometimes), and afterwards play an important tournament (where we'll stop learning and switch over from epsilon-greedy to the greedy policy). This is where Q-learning would be better.
There's an index mistake in your formula for Q-learning. From page 148 of Sutton and Barto's book:
Q(st,at) <-- Q(st,at) + alpha * [r(t+1) + gamma * max Q(st+1,a) - Q(st,at)]
The typo is in the argument of the max: the indexes are st+1 and a, while in your question they are st+1 and at+1 (those are correct for SARSA).
Hope this helps a bit.
In Q-Learning
This is yours:
Q-Learning: Q(St,At) = Q(St,At) + a [ R(t+1) + discount * max Q(St+1,At) - Q(St,At) ]
It should be changed to
Q-Learning: Q(St,At) = Q(St,At) + a [ R(t+1) + discount * max Q(St+1,a) - Q(St,At) ]
As you said, you have to find the maximum Q-value for the update equation by varying a; then you will have a new Q(St,At). Be careful: the a that gives you the maximum Q-value is not the next action. At this stage you only know the next state (St+1), and before going to the next round you update St to St+1 (St <-- St+1).
For each loop:
choose At from St using the Q-value
take At and observe Rt+1 and St+1
update the Q-value using the equation
St <-- St+1
until St is terminal
The only difference between SARSA and Q-learning is that SARSA takes the next action based on the current policy, while Q-learning takes the action with the maximum utility of the next state.
I didn't read any book, I just see the implication of them:
Q-learning just focuses on the (action grid), while
SARSA learning focuses on the (state-to-state) transitions, observing the action list of s and s' and then updating the (state-to-state grid).
Both SARSA and Q-learning agents follow an ε-greedy policy to interact with the environment.
The SARSA agent updates its Q-function using the next timestep's Q-value with whatever action the policy provides (mostly still greedy, but a random action is also accepted). The policy being executed and the policy being updated towards are the same.
The Q-learning agent updates its Q-function with only the action that brings the maximum next-state Q-value (totally greedy with respect to the policy). The policy being executed and the policy being updated towards are different.
Hence, SARSA is on-policy and Q-learning is off-policy.

Aggregating Probabilistic Plans

I'm trying to create a simple STRIPS-based planner. I've completed the basic functionality to compute separate probabilistic plans that will reach a goal, but now I'm trying to determine how to aggregate these plans based on their initial action, to determine what the "overall" best action is at time t0.
Consider the following example. Utility, bounded between 0 and 1, represents how well the plan accomplishes the goal. CF, also bounded between 0 and 1, represents the certainty-factor, or the probability that performing the plan will result in the given utility.
Plan1: CF=0.01, Utility=0.7
Plan2: CF=0.002, Utility=0.9
Plan3: CF=0.03, Utility=0.03
If all three plans, which are mutually exclusive, start with the action A1, how should I aggregate them to determine the overall "fitness" of using action A1? My first thought is to sum the certainty factors and multiply that by the average of the utilities. Does that seem correct?
So my current result would look like:
fitness(A1) = (0.01 + 0.002 + 0.03) * (0.7 + 0.9 + 0.03)/3. = 0.02282
Or should I calculate the individual likely utilities, and average those?
fitness(A1) = (0.01*0.7 + 0.002*0.9 + 0.03*0.03)/3. = 0.00323
Is there a more theoretically sound way?
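For reference, the two proposed aggregations evaluate as follows with the example numbers above (a quick arithmetic check of my own):

cf = [0.01, 0.002, 0.03]     # certainty factors
util = [0.7, 0.9, 0.03]      # utilities

first = sum(cf) * (sum(util) / len(util))                   # sum of CFs times average utility
second = sum(c * u for c, u in zip(cf, util)) / len(util)   # average of CF-weighted utilities
print(first)    # ~0.02282
print(second)   # ~0.00323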
If you take action A1, then you have to decide which of the 3 mutually exclusive plans to follow. At that point we can calculate that the expected utility of plan 1 is
E[plan1] = Prob[plan1 succeeds] * utility-on-success + Prob[plan1 fails] * utility-on-failure
         = 0.01 * 0.7 + 0.99 * 0    (assuming the utility of failure is 0)
         = 0.007
Similarly for the other 2 plans. But, since you can only choose one plan, the real expected utility (which I think is what you mean by "fitness") from taking action A1 is
max(E[plan1],E[plan2],E[plan3]) = fitness(A1)
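With the same numbers, the expected utilities and the max-based fitness come out as (a quick check, assuming utility 0 on failure as above):

plans = [(0.01, 0.7), (0.002, 0.9), (0.03, 0.03)]   # (CF, utility) per plan

expected = [cf * u for cf, u in plans]               # E[plan_i], failure utility assumed 0
print(expected)        # [0.007, 0.0018, 0.0009]
print(max(expected))   # fitness(A1) = 0.007 under the max rule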
I think that the fitness function you are talking about would have to also consider all the plans that don't have A1 as the first action. They could be all be really good, in which case doing A1 is a bad idea, or they might be terrible in which case doing A1 looks like a good move.
Looking at your ideas, the second one makes much more sense to me. It calculates the expected utility of picking a plan uniformly at random from all the plans that start with A1. This is under the assumption that a plan either achieves the given utility or fails completely. For example, the first plan achieves utility 0.7 with probability 0.01 and utility 0 with probability 0.99. This seems like a reasonable assumption; it's all you can do unless you have more data to work with.
So here's my suggestion: let A1 be the set of all plans starting with A1 and ~A1 the set of all plans not starting with A1. Then
F(A1) = fitness(A1) / fitness(~A1)
where fitness is as you defined it in the second example.
This should give you a ratio of expected utility for plans starting with A1 against ones that don't. If it's greater than one, A1 looks like a good action.
If you're interested in probabilistic planning, you should have a look at the POMDP model and algorithms like value iteration.
Edit:
Actually, I should have pointed you to Markov Decision Process (without the PO). I'm sorry.
What you should probably do for your problem is to maximize the expected utility, and call that the fitness.

Resources