I'm currently working on a temperature controller.
I have a Temperature_PID() function that returns the manipulated variable (which is the sum of the P, I, and D terms) but what do I do with this output?
The temperature is controlled by PWM, so 0% duty cycle = heater off and 100% duty cycle = heater on.
So far I tried
Duty_Cycle += Temperature_PID();
if(Duty_Cycle > 100) Duty_Cycle = 100;
else if(Duty_Cycle < 0) Duty_Cycle = 0;
This didn't work for me because the I term basically makes this system very unstable. Imagine integrating an area, adding another small data point, integrating the area again, and summing the results, over and over. Each data point makes this control scheme exponentially worse.
The other thing I would like to try is
Duty_Cycle = Expected_Duty_Cycle + Temperature_PID();
where Expected_Duty_Cycle is what the duty cycle should settle to once the controller reaches a stable point and Temperature_PID() returns 0. However, this also doesn't work, because the Expected_Duty_Cycle would always be changing depending on the conditions of the heater, e.g. different weather.
So my question is: what exactly do I do with the output of the PID? I don't understand how to assign a duty cycle based on the PID output. Ideally this would stay at 100% duty cycle until the temperature almost reaches the set point, and then start dropping off to a lower duty cycle. But using my first method (with my I gain set to zero) it only starts lowering the duty cycle after it has already overshot.
This is my first post. Hope I find my answer. Thank you stackoverflow.
EDIT:
Here's my PID function.
double TempCtrl_PID(PID_Data *pid)
{
    Thermo_Data tc;
    double error, pTerm, iTerm, dTerm;

    Thermo_Read(CHIP_TC1, &tc);
    pid->last_pv = pid->pv;
    pid->pv = Thermo_Temperature(&tc);
    error = pid->sp - pid->pv;
    if(error/pid->sp < 0.1)
        pid->err_sum += error;
    pTerm = pid->kp * error;
    iTerm = pid->ki * pid->err_sum;
    dTerm = pid->kd * (pid->last_pv - pid->pv);
    return pTerm + iTerm + dTerm;
}
EDIT 2:
Never used this before so let me know if the link is broken.
https://picasaweb.google.com/113881440334423462633/January302013
Sorry, Excel is crashing on me when I try to rename axes or the title. Note: there isn't a fan in the system yet so I can't cool the heater as fast as I can get it to heat up, so it spends very little time below the set point compared to above.
The first picture is a simple on-off controller.
The second picture is my PD controller. As you can see, it takes a lot longer for the temperature to come back down: the controller doesn't start reducing the duty cycle until after the temperature has already overshot, and then it does so too slowly. How exactly do I tell my controller to lower the duty cycle before it hits the max temperature?
The output of the PID is the duty cycle. You must adjust kp, ki, and kd to put the PID output in the range of the Duty_Cycle, e.g., 0 to 100. It is usually a good idea to explicitly limit the output in the PID function itself.
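As a rough sketch of what that can look like with the code from the question (control_step() and PWM_SetDuty() are illustrative placeholders, not part of the original code; only TempCtrl_PID() and PID_Data come from the question):

/* Sketch: use the PID output directly as the PWM duty cycle.
   PWM_SetDuty() is an assumed helper that programs the PWM peripheral
   with a 0-100 % duty cycle. */
void control_step(PID_Data *pid)
{
    double duty = TempCtrl_PID(pid);

    /* Clamp to the actuator range; ideally this clamp also lives inside
       the PID function so the integral can be frozen while saturated. */
    if (duty > 100.0) duty = 100.0;
    else if (duty < 0.0) duty = 0.0;

    PWM_SetDuty(duty);   /* 0 = heater off, 100 = heater fully on */
}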
You should "tune" your PID in simple steps.
Turn off the integral and derivative terms (set ki and kd to zero)
Slowly increase your kp until a 10% step change in the setpoint makes the output oscillate
Reduce kp by 30% or so, which should eliminate the oscillations
Set ki to a fraction of kp and adjust to get your desired tradeoff of overshoot versus time to reach setpoint
Hopefully, you will not need kd, but if you do, make it smaller still
Your PID controller output should be setting the value of the duty cycle directly.
Basically you are going to be controlling the heater settings based on the difference in the actual temperature versus the temperature setpoint.
You will need to adjust the values of the PID parameters to obtain the performance you are looking for.
First, set I and D to zero and put in a value for P, say 2 to start.
Change the setpoint and see what your response is. Increase P and make another setpoint change and see what happens. Eventually you will see the temperature oscillate consistently and never come to any stable value. This value is known as the "ultimate gain". Pay attention to the frequency of the oscillation as well. Set P equal to half of the ultimate gain.
Start with a value of 1.2*(ultimate gain)/(oscillation period) for I and change the setpoint. Adjust the values of P and I from those starting points to get to where you want to go, tracking the process and seeing whether increasing or decreasing the values improves things.
Once you have P and I, you can work on D, but depending on the process dynamics, adding a D term might make your life worse.
The Ziegler-Nichols method gives you some guidelines for PID values which should get you in the ballpark. From there you can make adjustments to get better performance.
You will have to weigh the options of having overshoot with the amount of time the temperature takes to reach the new setpoint. The faster the temperature adjusts the more overshoot you will have. To have no overshoot will increase that amount of time considerably.
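For reference, one common form of the Ziegler-Nichols table (the exact constants vary a little between sources; the text above uses Kp = 0.5*Ku for a P-only controller) can be captured in a small helper. The type and function names below are mine:

/* Sketch of the classic Ziegler-Nichols closed-loop tuning rules.
   Ku = ultimate gain (the P value where sustained oscillation starts),
   Pu = period of that oscillation in seconds. */
typedef struct { double kp, ki, kd; } Gains;

static Gains zn_pi(double Ku, double Pu)
{
    Gains g = { 0.45 * Ku, 0.54 * Ku / Pu, 0.0 };            /* PI row  */
    return g;
}

static Gains zn_pid(double Ku, double Pu)
{
    Gains g = { 0.6 * Ku, 1.2 * Ku / Pu, 0.075 * Ku * Pu };  /* PID row */
    return g;
}

Treat these strictly as starting points; back the gains off as described above to trade overshoot against settling time.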
A few suggestions:
You seem to be integrating twice: once inside your TempCtrl_PID function (the err_sum accumulator) and once outside, with Duty_Cycle +=. So now your P term is really an I term.
Start with only a P term and keep increasing it until the system becomes unstable. Then back off (e.g. use 1/2 to 1/4 the value where it becomes unstable) and start adding an I term. Start with very low values on the I term and then gradually increase. This process is a way of tuning the loop. Because the system will probably have a pretty long time constant this may be time consuming...
You can add some feed-forward as you suggest (an expected duty cycle for a given setpoint - map it out by setting the duty cycle and letting the system stabilize). It doesn't matter if that term isn't perfect, since the loop will take out the remaining error. You can also simply add some constant bias to the duty cycle; keep in mind a constant wouldn't really make any difference, as the integrator will take it out - it will only affect a cold start. (A minimal loop sketch follows these suggestions.)
Make sure you have some sort of fixed time base for this loop. E.g. make an adjustment every 10ms.
I would not worry about the D term for now. A PI controller should be good enough for most applications.
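Putting the feed-forward, the fixed time base, and output clamping together, the loop could look roughly like this. Feedforward_Duty(), Wait_For_Tick_10ms(), PWM_SetDuty() and the pid instance are illustrative placeholders; only TempCtrl_PID() and PID_Data come from the question:

/* Sketch: feed-forward plus PI(D) correction on a fixed 10 ms time base. */
void control_task(PID_Data *pid)
{
    for (;;)
    {
        Wait_For_Tick_10ms();                    /* fixed sample period */

        /* Feed-forward: duty cycle that roughly holds this setpoint,
           e.g. from a table measured by letting the system stabilize. */
        double duty = Feedforward_Duty(pid->sp);

        /* The PI(D) term removes the remaining error. */
        duty += TempCtrl_PID(pid);

        if (duty > 100.0) duty = 100.0;
        else if (duty < 0.0) duty = 0.0;

        PWM_SetDuty(duty);
    }
}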
Related
I am implementing PID control using the standard libraries of the Teensy (ATmega32U4). My control variable is a PWM signal. My process variable is the current angular position of a DC motor interfaced with a 10 kΩ potentiometer; the code reads the position from an ADC input on a scale of 0 to 270 degrees. The set point is a laser-cut joystick whose handle is also attached to a 10 kΩ potentiometer and whose angular position is read in the same manner as the process variable.
My question is how to implement the integral portion of the control scheme. The integral term is given by:
Error = Set Point – Process Variable
Integral = Integral + Error
Control Variable = (Kp * Error) + (Ki * Integral)
But I am unsure how to calculate the integral portion. Do we need to account for the amount of time that has passed between samples, or do we just accumulate the error and initialize the integral term to zero, so that it is truly discretized? Since I'm using C, can the Integral term just be a global variable?
Am I on the right track?
Since the sample time (the interval at which the PID is calculated) is always the same, it does not strictly matter whether you scale the integral term by the sample time: a constant sample time just gets absorbed into the Ki constant. It is better to include the sample time explicitly, so that the PID behaves the same if you later change the sample time, but it is not compulsory.
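A minimal sketch of the two equivalent forms (all names are mine, and dt is assumed constant):

/* Form 1: sample time written out explicitly. */
double pi_step_explicit(double error, double dt,
                        double Kp, double Ki, double *integral)
{
    *integral += error * dt;                  /* integral in units*seconds */
    return Kp * error + Ki * *integral;
}

/* Form 2: dt folded into the gain. Behaves identically as long as dt
   never changes, because Ki_discrete = Ki * dt. */
double pi_step_folded(double error,
                      double Kp, double Ki_discrete, double *error_sum)
{
    *error_sum += error;                      /* plain sum of errors */
    return Kp * error + Ki_discrete * *error_sum;
}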
Here is the PID_Calc function I wrote for my Drone Robotics competition, in Python. Ignore the "[index]" subscripts; those are arrays I made to keep my code generic.
def pid_calculator(self, index):
    # Calculate the current residual error; the drone reaches the desired point when this becomes zero.
    self.Current_error[index] = self.setpoint[index] - self.drone_position[index]
    # Accumulate the values needed for the P, I, D terms. loop_time is the sample time (dt).
    self.errors_sum[index] = self.errors_sum[index] + self.Current_error[index] * self.loop_time
    self.errDiff = (self.Current_error[index] - self.previous_error[index]) / self.loop_time
    # Calculate the individual controller terms - P, I, D.
    self.Proportional_term = self.Kp[index] * self.Current_error[index]
    self.Derivative_term = self.Kd[index] * self.errDiff
    self.Integral_term = self.Ki[index] * self.errors_sum[index]
    # Compute the PID output by adding all the individual terms.
    self.Computed_pid = self.Proportional_term + self.Derivative_term + self.Integral_term
    # Store the current error so it becomes the previous error next time.
    self.previous_error[index] = self.Current_error[index]
    # Return the computed PID output.
    return self.Computed_pid
Here is the link to my whole PID script on GitHub.
See if that helps.
Press the up button if you like the answer, and star my GitHub repository if you like the script.
Thank you.
To add to the previous answer, also consider the case of integral wind-up in your code: there should be some mechanism to clamp or reset the integral term if wind-up occurs. Also, select a sufficiently large data type for the integral (sum) term to avoid overflow (typically long long).
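A minimal sketch of both points (the limits and names are illustrative; pick the limits so Ki times the limit roughly equals the full output range):

#include <stdint.h>

/* Sketch: clamped integral accumulator (basic anti-windup) held in a
   wide type so the sum itself cannot overflow. */
#define ERR_SUM_MAX  100000LL
#define ERR_SUM_MIN -100000LL

static int64_t err_sum = 0;

void integrate_error(int32_t error)
{
    err_sum += error;
    if (err_sum > ERR_SUM_MAX)       err_sum = ERR_SUM_MAX;
    else if (err_sum < ERR_SUM_MIN)  err_sum = ERR_SUM_MIN;
}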
If you are selecting a sufficiently high sampling frequency, the division can be avoided to reduce the computation involved. However, if you want to experiment with the sampling time, keep the sampling times related by powers of two, so that the division can be accomplished through shift operations. For example, if the selectable sampling times are 100 ms, 50 ms, 25 ms, and 12.5 ms, the dividing factors can be 1, 1<<1, 1<<2, and 1<<3.
It is convenient to keep all the associated variables of the PID controller in a single struct, and then use this struct as parameters in functions operating on that PID. This way, the code will be modular, and many PID loops can simultaneously operate on the microcontroller, using the same code and just different instances of the struct. This approach is especially useful in large robotics projects, where you have many loops to control using a single CPU.
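For example (a minimal sketch, not a drop-in API; the field and function names are my own):

/* Sketch: all controller state lives in one struct, so the same code
   can run several independent loops on one microcontroller. */
typedef struct {
    double kp, ki, kd;     /* gains          */
    double sp;             /* setpoint       */
    double err_sum;        /* integral state */
    double last_err;       /* for the D term */
} Pid;

double pid_step(Pid *p, double pv, double dt)
{
    double err = p->sp - pv;
    double d   = (err - p->last_err) / dt;
    p->err_sum += err * dt;
    p->last_err = err;
    return p->kp * err + p->ki * p->err_sum + p->kd * d;
}

/* Two independent loops, one code path: */
Pid heater = { .kp = 2.0, .ki = 0.05, .kd = 0.0, .sp = 100.0 };
Pid fan    = { .kp = 1.0, .ki = 0.02, .kd = 0.0, .sp =  40.0 };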
I have a setup with a BeagleBone Black which communicates over I²C with its slaves every second and reads data from them. Sometimes the I²C readout fails, though, and I want to get statistics about these failures.
I would like to implement an algorithm which displays the percentage of successful communications over the last 5 minutes (up to 24 hours) and updates that value constantly. If I implemented that 'normally', with an array storing success/failure for every second, it would mean a lot of wasted RAM and CPU load for a minor feature (especially if I wanted the statistics for the last 24 hours).
Does someone know a good way to do that, or can anyone point me in the right direction?
Why don't you just implement a low-pass filter? For every successful transfer, you push in a 1, for every failed one a 0; the result is a number between 0 and 1. Assuming that your transfers happen periodically, this works well -- and you just have to adjust the cutoff frequency of that filter to your desired "averaging duration".
However, I can't follow your RAM argument: assuming you store one byte representing success or failure per transfer, which you say happens every second, you end up with 86400B per day -- 85KB/day is really negligible.
EDIT Cutoff frequency is something from signal theory and describes the highest or lowest frequency that passes a low or high pass filter.
Implementing a low-pass filter is trivial; something like (pseudocode):
new_val = 1     // init with no failed transfers
alpha = 0.001
while(true):
    old_val = new_val
    success = do_transfer_and_return_1_on_success_or_0_on_failure()
    new_val = alpha * success + (1 - alpha) * old_val
That's a single-tap IIR (infinite impulse response) filter; single tap because there's only one alpha and thus, only one number that is stored as state.
EDIT2: the value of alpha defines the behaviour of this filter.
EDIT3: you can use a filter design tool to give you the right alpha; just set your low pass filter's cutoff frequency to something like 0.5/integrationLengthInSamples, select an order of 0 for the IIR and use an elliptic design method (most tools default to butterworth, but 0 order butterworths don't do a thing).
I'd use scipy and convert the resulting (b,a) tuple (a will be 1, here) to the correct form for this feedback form.
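If you would rather skip the filter-design tooling, a usable rule of thumb for this kind of single-tap filter is alpha ≈ sample_period / averaging_time. A sketch in C, with the 5-minute averaging time and the names as illustrative assumptions:

/* Sketch: exponentially weighted success rate, one transfer per second.
   alpha ~= 1/300 gives a "memory" of roughly 5 minutes. */
static double success_rate = 1.0;          /* start optimistic          */
static const double alpha  = 1.0 / 300.0;  /* sample period / avg. time */

void record_transfer(int succeeded)        /* 1 = ok, 0 = failed        */
{
    success_rate = alpha * (succeeded ? 1.0 : 0.0)
                 + (1.0 - alpha) * success_rate;
}

/* success_rate * 100.0 is the percentage to display. */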
UPDATE: In light of the comment by the OP ('determine a trend of which devices are failing'), I would recommend the geometric average that Marcus Müller put forward.
ACCURATE METHOD
The method below is aimed at obtaining 'well defined' statistics for performance over time that are also useful for 'after the fact' analysis.
Notice that the geometric average 'looks back' over a number of recent messages rather than over a fixed time period.
Maintain a rolling array of 24*60/5 = 288 'prior success rates' (SR[i] with i=-1, -2,...,-288) each representing a 5 minute interval in the preceding 24 hours.
That will consume about 2.5K if the elements are 64-bit doubles.
To 'effect' constant updating use an Estimated 'Current' Success Rate as follows:
ECSR = (t*S/M+(300-t)*SR[-1])/300
where S and M are the counts of successful messages and total messages in the current (partially complete) period, and SR[-1] is the previous (now complete) bucket.
t is the number of seconds expired of the current bucket.
NB: When you start up you need to use 300*S/M/t.
In essence the approximation assumes the error rate was steady over the preceding 5 - 10 minutes.
To 'effect' a 24-hour look back you can either 'shuffle' the data down (by copy or memcpy()) at the end of each 5-minute interval, or implement a circular array by keeping track of the current bucket index.
NB: For many management/diagnostic purposes intervals of 15 minutes are often entirely adequate. You might want to make the 'grain' configurable.
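A sketch of that bookkeeping in C, using the circular-array variant (the names are mine; the estimate follows the ECSR formula above, and the special startup handling before the first bucket completes is left out):

/* Sketch: 288 five-minute buckets covering 24 hours, stored circularly. */
#define BUCKETS        288      /* 24 h / 5 min */
#define BUCKET_SECONDS 300

static double sr[BUCKETS];      /* prior success rates              */
static int    head = 0;         /* index of the bucket being filled */
static long   S = 0, M = 0;     /* successes / messages this bucket */
static int    t = 0;            /* seconds elapsed in this bucket   */

void record(int success)
{
    M++; if (success) S++; t++;
    if (t == BUCKET_SECONDS) {                     /* close the bucket */
        sr[head] = (M > 0) ? (double)S / M : 1.0;
        head = (head + 1) % BUCKETS;
        S = M = t = 0;
    }
}

double ecsr(void)   /* estimated "current" success rate */
{
    double cur  = (M > 0) ? (double)S / M : 1.0;
    double prev = sr[(head + BUCKETS - 1) % BUCKETS];
    return (t * cur + (BUCKET_SECONDS - t) * prev) / BUCKET_SECONDS;
}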
Okay, this is a bit of a maths and DSP question.
Let us say I have 20,000 samples which I want to resample at a different pitch. Twice the normal rate for example. Using an Interpolate cubic method found here I would set my new array index values by multiplying the i variable in an iteration by the new pitch (in this case 2.0). This would also set my new array of samples to total 10,000. As the interpolation is going double the speed it only needs half the amount of time to finish.
But what if I want my pitch to vary throughout the recording? Basically I would like it to slowly increase from a normal rate to 8 times faster (at the 10,000 sample mark) and then back to 1.0. It would be an arc. My questions are these:
How do I calculate how many samples would the final audio track be?
How do I create an array of pitch values that would represent this increase from 1.0 to 8.0 and back to 1.0?
Mind you this is not for live audio output, but for transforming recorded sound. I mainly work in C, but I don't know if that is relevant.
I know this probably is complicated, so please feel free to ask for clarifications.
To represent an increase from 1.0 to 8.0 and back, you could use a function of this form:
f(x) = 1 + 7/2*(1 - cos(2*pi*x/y))
Where y is the number of samples in the resulting track.
It will start at 1 for x=0, increase to 8 for x=y/2, then decrease back to 1 for x=y.
Here's what it looks like for y=10:
Now we need to find the value of y depending on z, the original number of samples (20,000 in this case, but let's be general). For this we require the integral of 1 + 7/2*(1 - cos(2*pi*x/y)) dx from 0 to y to equal z. The cosine term integrates to zero over the full period, so the integral is simply (9/2)*y, and the solution is y = 2*z/9 = z/4.5, nice and simple :)
Therefore, for an input with 20,000 samples, you'll get 4,444 samples in the output.
Finally, instead of multiplying the output index by the pitch value, you can access the original samples like this: output[i] = input[g(i)], where g is the integral of the above function f:
g(x) = (9*x)/2-(7*y*sin((2*pi*x)/y))/(4*pi)
For y=4444, it looks like this:
In order not to end up with aliasing in the result, you will also need to low-pass filter before or during interpolation, using either a filter with a variable transition frequency kept below half the local sample rate, or a fixed cutoff frequency more than 16X lower than the original sample rate (for an 8X peak pitch increase). This will require a more sophisticated interpolator than a cubic spline; for best results, you might want to try a variable-width windowed-sinc kernel interpolator.
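A sketch of how the mapping can be applied in C. Nearest-neighbour lookup is used only to keep it short; in practice you would evaluate your cubic (or windowed-sinc) interpolator at g(i) and low-pass filter as described above. The function names are mine:

#include <math.h>
#include <stdlib.h>

static const double PI = 3.14159265358979323846;

/* g(x) = (9*x)/2 - (7*y*sin((2*pi*x)/y))/(4*pi), the integral of f. */
static double g_map(double x, double y)
{
    return 4.5 * x - 7.0 * y * sin(2.0 * PI * x / y) / (4.0 * PI);
}

/* Resample in[0..z-1] into a newly allocated buffer of y = 2*z/9 samples. */
short *resample_arc(const short *in, long z, long *out_len)
{
    long y = (long)(2.0 * z / 9.0);
    short *out = malloc(y * sizeof *out);
    if (!out) return NULL;
    for (long i = 0; i < y; i++) {
        long src = (long)(g_map((double)i, (double)y) + 0.5);
        if (src >= z) src = z - 1;          /* guard against rounding   */
        out[i] = in[src];                   /* nearest-neighbour lookup */
    }
    *out_len = y;
    return out;
}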
Given is an array of 320 elements (int16), which represent an audio signal (16-bit LPCM) of 20 ms duration. I am looking for a most simple and very fast method which should decide whether this array contains active audio (like speech or music), but not noise or silence. I don't need a very high quality of the decision, but it must be very fast.
It occurred to me first to add all squares or absolute values of the elements and compare their sum with a threshold, but such a method is very slow on my system, even if it is O(n).
You're not going to get much faster than a sum-of-squares approach.
One optimization that you may not be doing so far is to use a running total. That is, in each time step, instead of summing the squares of the last n samples, keep a running total and update it with the square of the most recent sample. To keep the running total from growing and growing over time, add an exponential decay. In pseudocode:
decay_constant = 0.999; // Some suitable value smaller than 1
total = 0;
for t = 1, ...
    // Exponential decay
    total = total * decay_constant;
    // Add in the square of the latest sample
    total += current_sample * current_sample;
    if total > threshold
        // do something
    end
end
Of course, you'll have to tune the decay constant and threshold to suit your application. If this isn't fast enough to run in real time, you have a seriously underpowered DSP...
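On a small CPU, an integer-only version of the same idea avoids the floating-point multiply by decaying with a shift; the constants and names below are illustrative and would need tuning:

#include <stdint.h>

/* Sketch: leaky sum of squared samples. Each step keeps 255/256 of the
   old total (the decay) and adds the square of the newest sample. */
int frame_is_active(const int16_t *samples, int n,
                    int64_t *total, int64_t threshold)
{
    for (int i = 0; i < n; i++) {
        *total -= *total >> 8;                        /* decay ~ *255/256 */
        *total += (int32_t)samples[i] * samples[i];   /* add the square   */
    }
    return *total > threshold;
}

/* Usage: int64_t energy = 0;
          if (frame_is_active(buf, 320, &energy, THRESHOLD)) ...          */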
You might try calculating two simple "statistics". The first would be the spread (max - min); silence will have a very low spread. The second would be variety: divide the range of possible values into, say, 16 brackets (equal value ranges) and, as you go through the elements, determine which bracket each element falls into. Noise will have similar counts in all brackets, whereas music or speech should prefer some of them while neglecting others.
This should be possible to do in just one pass through the array, and you do not need complicated arithmetic, just some addition and comparison of values.
Also consider some approximation, for example take only each fourth value, thus reducing the number of checked elements to 80. For audio signal, this should be okay.
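A one-pass sketch of both statistics in C, including the every-4th-sample shortcut (the bracket mapping and names are illustrative):

#include <stdint.h>

/* Sketch: compute the spread (max - min) and a 16-bucket histogram of
   the frame in a single pass, visiting only every 4th sample. */
void frame_stats(const int16_t *samples, int n,
                 int *spread, int counts[16])
{
    int16_t lo = samples[0], hi = samples[0];
    for (int i = 0; i < 16; i++) counts[i] = 0;

    for (int i = 0; i < n; i += 4) {
        int16_t s = samples[i];
        if (s < lo) lo = s;
        if (s > hi) hi = s;
        /* Map the full int16 range into 16 equal brackets: shift to
           unsigned, then keep the top 4 bits. */
        counts[(uint16_t)(s + 32768) >> 12]++;
    }
    *spread = hi - lo;
}

Silence should show a small spread with almost everything in one or two middle brackets, noise spreads the counts fairly evenly, and speech or music favours some brackets while neglecting others.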
I did something like this a while back. After some experimentation I arrived at a solution that worked sufficiently well in my case.
I used the rate of change in the cube of the running average over about 120ms. When there is silence (only noise that is) the expression should be hovering around zero. As soon as the rate starts increasing over a couple of runs, you probably have some action going on.
rate = cur_avg^3 - prev_avg^3
I used a cube because the square just wasn't aggressive enough. If the cube is too slow for you, try using the square and a bit shift instead. Hope this helps.
Clearly the complexity has to be at least O(n). Some simple algorithm that calculates a value range is probably good enough for the moment, but I would look up Voice Activity Detection on the web and look for related code samples.
Question: A PID controller has three parameters Kp, Ki and Kd which could affect the output performance. A differential driving robot is controlled by a PID controller. The heading information is sensed by a compass sensor. The moving forward speed is kept constant. The PID controller is able to control the heading information to follow a given direction. Explain the outcome on the differential driving robot performance when the three parameters are increased individually.
This is a question that has come up in a past paper, and while it most likely won't show up this year, it still worries me. It's the only question that has had me thinking for quite some time. I'd love an answer in simple terms. Most of what I've read on the internet doesn't make much sense to me, as it goes heavily into detail and off topic for my case.
My take on this:
I know that the proportional term, Kp, is entirely based on the error and that, let's say, double the error would mean doubling Kp (applying proportional force). This therefore implies that increasing Kp is a result of the robot heading in the wrong direction so Kp is increased to ensure the robot goes on the right direction or at least tries to reduce the error as time passes so an increase in Kp would affect the robot in such a way to adjust the heading of the robot so it stays on the right path.
The derivative term, Kd, is based on the rate of change of the error so an increase in Kd implies that the rate of change of error has increased over time so double the error would result in double the force. An increase by double the change in the robot's heading would take place if the robot's heading is doubled in error from the previous feedback result. Kd causes the robot to react faster as the error increases.
An increase in the integral term, Ki, means that the error is increased over time. The integral accounts for the sum of error over time. Even a small increase in the error would increase the integral so the robot would have to head in the right direction for an equal amount of time for the integral to balance to zero.
I would appreciate a much better answer and it would be great to be confident for a similar upcoming question in the finals.
Side note: I've posted this question on the Robotics site earlier, but seeing that questions there are hardly ever noticed, I've also posted it here.
I would highly recommend reading the article PID Without a PhD; it gives a great explanation along with some implementation details. The best part is the numerous graphs: they show you what changing the P, I, or D term does while holding the others constant.
And if you want a real-world application, Atmel provides example code on their site (for an 8-bit MCU) that perfectly mirrors the PID Without a PhD article. It follows the code from AVR's website exactly (Atmel makes the ATmega328P microcontroller used on Arduino UNO boards): PDF explanation and Atmel Code in C
But here is a general explanation the way I understand it.
Proportional: This is a proportional relationship between the error and the output, something like Kp*(target - actual). It's simply a scaling factor on the error. It decides how quickly the system should react to an error (if it is of any help, you can think of it like an amplifier's slew rate). A large value will quickly try to fix errors, and a small value will take longer. With higher values, though, we get into an overshoot condition, and that's where the next terms come into play.
Integral: This is meant to account for errors in the past; in fact it is the sum of all past errors. This is often useful for things like a small DC/constant offset that a proportional controller can't fix on its own. Imagine you give a step input of 1, and after a while the output settles at 0.9 and it's clear it's not going anywhere. The integral portion will see that the output is consistently ~0.1 too low, so it will add that back in, hopefully bringing the control closer to 1. This term usually helps to stabilize the response curve. Since it is taken over a long period of time, it should reduce noise and any fast changes (like those found in overshoot/ringing conditions). Because it is an aggregate, it is a very sensitive measurement and is usually kept very small when compared to the other terms. A lower value will make changes happen very slowly and create a very smooth response (this can also cause "wind-up"; see the article).
Derivative: This is supposed to account for the "future". It uses the slope of the most recent samples. Remember this is the slope; it has nothing to do with the position error (current - goal), it is the previous measured position minus the current measured position. This is the most sensitive to noise, and when it is too high it often causes ringing. A higher value encourages change, since we are "amplifying" the slope.
I hope that helps. Maybe someone else can offer another viewpoint, but that's typically how I think about it.