I am trying to control an industrial AC servo motor using my XE166 device.
My controller interfaces with the servo controller using the PULSE and DIRECTION control signals.
To achieve jerk-free motion I have been trying to create an S-curve motion profile (motor speed vs. time).
Calculating instantaneous speed is no problem as I know the distance moved by the motor per pulse, and the pulse duration.
I need to understand how to arrive at a mathematical equation that tells me what the nth pulse's duration should be so that the speed profile follows an S-curve.
Since this must be a common requirement in any domain involving motion control (robotics, CNC, industrial), there must be some standard reference for doing it.
With anticipation
I have just answered a similar question over on robotics.
The standard solution would be to use a low-level velocity PID controller to generate the PULSE and DIRECTION signals given a velocity demand, and then have an outer supervisory controller which ramps the velocity demand (mm/s) up or down in accordance with your required acceleration (mm/s/s) and jolt (mm/s/s/s) control parameters.
To begin with, I would suggest trying a trapezoidal velocity profile (instantaneous changes in acceleration), as I described in Control both Velocity and Position (Linear actuator), and then extending it to add the jolt/jerk term later.
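As a rough illustration of what the outer supervisory loop might look like (all names, units and the fixed update period dt are my own assumptions, not XE166-specific code), a jolt-limited velocity ramp can be as simple as:

// Sketch of a jolt-limited (S-curve-style) velocity ramp, called once per
// control period dt. The returned value is the velocity demand fed to the
// low-level controller that generates the PULSE/DIRECTION timing.
typedef struct {
    float v;        // current velocity demand (mm/s)
    float a;        // current acceleration (mm/s^2)
    float a_max;    // acceleration limit (mm/s^2)
    float j_max;    // jolt/jerk limit (mm/s^3)
} Ramp;

float ramp_update(Ramp *r, float v_target, float dt)
{
    // Move the acceleration towards the value that drives v towards v_target,
    // but never change it faster than the jolt limit allows.
    float a_des = (v_target > r->v) ? r->a_max : -r->a_max;
    float da = a_des - r->a;
    float da_max = r->j_max * dt;
    if (da >  da_max) da =  da_max;
    if (da < -da_max) da = -da_max;
    r->a += da;

    // Integrate acceleration into the velocity demand, clamping at the target.
    r->v += r->a * dt;
    if ((r->a > 0.0f && r->v > v_target) || (r->a < 0.0f && r->v < v_target)) {
        r->v = v_target;
        r->a = 0.0f;
    }
    return r->v;
}

This sketch only shows the structure of the loop; a complete profile generator would also ramp the acceleration back down before the target speed is reached so the end of the ramp is jolt-limited too. The low-level controller would then turn each velocity demand into a pulse period, e.g. period = distance_per_pulse / v, which is what ultimately sets the duration of the nth pulse.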
I read a tutorial on how to implement the Seek behavior from steering behaviors. The link is here, and this is the graph that illustrates the algorithm:
I know that velocity, force, and acceleration are all vectors. But how does "steering" in the formula "steering = desired_velocity - current_velocity" become a force rather than a velocity in this article? Why does this make sense? Does it mean that we can mix them in one calculation? Does adding or subtracting one velocity vector from another produce a force vector? If not, why is the result called a "force"? I know how steering behaviors work in AI. The key point is that we can sum all the different steering forces together to get a total force. This total force can be used in the formula "a = F/m" to get the acceleration. After that, we can use the acceleration to calculate the new position and velocity of the object in the game-loop update.
In my view, the "F" should be the steering force, but I'm stuck on understanding how to calculate it.
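For reference, here is a minimal sketch in C of the update loop I am describing (only desired_velocity and steering are names from the tutorial; everything else is my own):

#include <math.h>

typedef struct { float x, y; } Vec2;

static Vec2  vadd(Vec2 a, Vec2 b)    { return (Vec2){ a.x + b.x, a.y + b.y }; }
static Vec2  vsub(Vec2 a, Vec2 b)    { return (Vec2){ a.x - b.x, a.y - b.y }; }
static Vec2  vscale(Vec2 a, float s) { return (Vec2){ a.x * s, a.y * s }; }
static float vlen(Vec2 a)            { return sqrtf(a.x * a.x + a.y * a.y); }

static Vec2 vtruncate(Vec2 a, float max_len)
{
    float len = vlen(a);
    return (len > max_len && len > 0.0f) ? vscale(a, max_len / len) : a;
}

// One Seek update step: "steering" is computed as a velocity difference but
// is then treated as a force (clipped to max_force and divided by mass).
void seek_update(Vec2 *pos, Vec2 *vel, Vec2 target,
                 float max_speed, float max_force, float mass, float dt)
{
    // desired_velocity: full speed straight towards the target.
    Vec2 to_target = vsub(target, *pos);
    float dist = vlen(to_target);
    Vec2 desired_velocity = (dist > 0.0f)
        ? vscale(to_target, max_speed / dist)
        : (Vec2){ 0.0f, 0.0f };

    // steering = desired_velocity - current_velocity, clipped like a force.
    Vec2 steering = vtruncate(vsub(desired_velocity, *vel), max_force);

    // a = F / m, then integrate velocity and position.
    Vec2 accel = vscale(steering, 1.0f / mass);
    *vel = vtruncate(vadd(*vel, vscale(accel, dt)), max_speed);
    *pos = vadd(*pos, vscale(*vel, dt));
}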
I am making an automated quadcopter: no radio transmitter/receiver, and the quadcopter flies on its own following pre-programmed orders.
Almost all quadcopters implement PID on throttle/yaw/pitch/roll, as these four axes map directly to a remote controller. However, this is a bit inconvenient for an automated one without a controller. For an automated quadcopter without user input, velocities along the x/y/z axes are of more concern, because:
Keeping balance (yaw/pitch/roll = 0) doesn't mean keeping still. First, there will be some manufacturing error, so it might still have some acceleration. Second, even if there's no acceleration, it can have speed, causing it to drift in space, and since there's no user input, it cannot correct the drift on its own. Besides, if there's wind, it might get blown away even though it thinks it's balanced.
Orders are mostly given as "go to position (x, y)" or "keeping velocity x, fly above position (x, y) and start camera video capture", and so on. These orders can't be translated into yaw/pitch/roll directly.
So basically I have two ideas:
Implement PID on yaw/pitch/roll/height and use a second PID loop to control velocity. The second PID loop takes the desired velocity and current velocity as input and outputs the desired yaw/pitch/roll for the first loop.
Implement PID directly on velocity. The PID loop takes the desired velocity and current velocity (obtained by integrating acceleration from the accelerometer) as input and outputs the PWM widths for the four motors.
Has anyone tried idea 2? Will it work?
PID maps a measured value to a controlled value. If you can sense velocity reliably, you can use it to drive a PID. However, integrating an accelerometer won't give you a reliable enough velocity. Any sensing errors will compound through the integration and could grow your velocity estimate unbounded.
The R/P/Y PIDs on a quadcopter don't control the PWM to the motors directly; they control the roll, pitch, and yaw rates, then convert the rates to thrusts, then split the thrust into components for the various motors, then convert those thrusts into PWMs. See the ArduPilot MotorMatrix code for hints.
You can put PIDs anywhere you like in the process between whatever is choosing the velocity and the motors, but you likely need some intermediate control/state variables between 'velocity' and 'PWM[1-4]' to balance things and have coordinated flight.
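As a rough sketch of the kind of cascade I mean (all names, gains, and the single-axis simplification are illustrative only, not ArduPilot code):

// Outer velocity PID -> desired pitch angle; inner attitude PID -> pitch
// correction, which would then be mixed with roll/yaw/throttle terms into
// per-motor PWM values.
typedef struct { float kp, ki, kd, integ, prev_err; } Pid;

static float pid_step(Pid *p, float setpoint, float measured, float dt)
{
    float err = setpoint - measured;
    p->integ += err * dt;
    float deriv = (err - p->prev_err) / dt;
    p->prev_err = err;
    return p->kp * err + p->ki * p->integ + p->kd * deriv;
}

// One control cycle for the pitch axis only (roll works the same way).
float pitch_command(Pid *vel_pid, Pid *att_pid,
                    float vx_setpoint, float vx_measured,
                    float pitch_measured, float dt)
{
    float desired_pitch = pid_step(vel_pid, vx_setpoint, vx_measured, dt); // outer loop
    return pid_step(att_pid, desired_pitch, pitch_measured, dt);           // inner loop
}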
This question is about interpolating sine wave oscillators:
Assuming that the amplitude and frequency trajectories for a sine wave are defined by corresponding breakpoint functions or read from a user interface, the following few lines of C code show a common sine-wave oscillator paradigm for synthesizing inNumberFrames of mono audio samples in real time, using linear interpolation:
// ... (pre-computing initial amplitude and phase values)...
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    // Per-sample amplitude from the linearly interpolated envelope.
    buffer[frame] = sinf(phasef) * ampf[frame];
    // Advance the phase by the linearly interpolated phase increment.
    phasef += osc->previousPartialPhaseIncrementf + df * frame;
    // Wrap the phase to keep it within one period.
    if (phasef > TWO_PI) phasef -= TWO_PI;
}
// ... (storing current amplitude and phase values)...
While musically satisfying in general (although it can be performance-optimized using pre-computed sine wavetables and pointer arithmetic), there are occasions when linear interpolation artifacts can be heard. I'd like to know whether there is a free example of cubic or bicubic interpolation of the amplitude and phase instantaneous values of an oscillator. Given that the render thread has real-time priority (at least in CoreAudio), it is important to keep it lightweight, and also to avoid running into too many lower-priority threading issues if interpolating outside the render thread. I'd be thankful to anyone pointing me at a working example of a (bi)cubic interpolation sine-wave oscillator algorithm in C, no matter how simple (or complex) it is.
Thanks in advance.
UPDATE:
Perhaps this illustration can clarify what was meant by values to be interpolated. Purple dots represent a frequency envelope breakpoint curve (connected by linear interpolation). Cyan dots represent a possibility of superimposed polynomial interpolations. First and last segments are off-scale:
Have a look at musicdsp.org, where there is a post on (almost) Ready-to-use oscillators. The end of the post contains the method that you might be interested in, with the following signature (by Ollie N.):
float Oscillator::UpdateWithCubicInterpolation( float frequency )
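If you would rather roll your own, a 4-point, 3rd-order (Catmull-Rom style) polynomial over the surrounding breakpoint values is a common lightweight choice. The following is only a sketch of that idea, not the musicdsp.org code:

// Catmull-Rom cubic interpolation between y1 and y2, with y0 and y3 as their
// neighbours and t in [0, 1].
static inline float cubic_interp(float y0, float y1, float y2, float y3, float t)
{
    float a0 = -0.5f * y0 + 1.5f * y1 - 1.5f * y2 + 0.5f * y3;
    float a1 =         y0 - 2.5f * y1 + 2.0f * y2 - 0.5f * y3;
    float a2 = -0.5f * y0             + 0.5f * y2;
    float a3 =                    y1;
    return ((a0 * t + a1) * t + a2) * t + a3;
}

// In the render loop you interpolate the control values, not the sine itself,
// e.g. per-sample amplitude and phase increment:
//   float amp  = cubic_interp(amp0, amp1, amp2, amp3, t);
//   float incr = cubic_interp(inc0, inc1, inc2, inc3, t);
//   buffer[frame] = sinf(phasef) * amp;
//   phasef += incr;  if (phasef > TWO_PI) phasef -= TWO_PI;

Since it is just a handful of multiply-adds per control value per sample, it stays cheap enough for a real-time render thread.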
This is my first post on SO. I haven't developed much code for embedded systems yet, but I have a few problems and need help from more advanced programmers. I use the following devices:
- LandTiger board with LPC1768 (Cortex M3) MCU,
- Digilent pmodACL with ADXL345 accelerometer (3 axis),
- Digilent pmodGYRO with L3G4200D gyroscope (3 axis).
I would like to get some information about device orientation, i.e. the rotation angles about the X, Y and Z axes. I've read that in order to achieve this I need to combine data from both the accelerometer and the gyroscope using a Kalman filter or its simpler form, i.e. a complementary filter. I would like to know if it's possible to count roll, pitch and yaw over the full range (0-360 degrees) using measurement data only from the gyroscope and accelerometer (without a magnetometer). I've also found some mathematical formulas (http://www.ewerksinc.com/refdocs/Tilt%20Sensing%20with%20LA.pdf and http://www.freescale.com/files/sensors/doc/app_note/AN3461.pdf) but they contain square roots in the numerators/denominators, so the information about the proper quadrant of the coordinate system is lost.
The question you are asking is a fairly frequent one, and is fairly complex, with many different solutions.
Although the title mentions only an accelerometer, the body of your post mentions a gyroscope, so I will assume you have both. In addition there are many steps to getting low-cost accelerometers and gyros to work, one of those is to do the voltage-to-measurement conversion. I will not cover that part.
First and foremost I will answer your question.
You asked if by 'counting' the gyro measurements you can estimate the attitude (orientation) of the device in Euler Angles.
Short answer: Yes, you could sum the scaled gyro measurements to get a very noisy estimate of the device rotation (actual radians turned, not attitude); however, it would fail as soon as you rotate about more than one axis. This will not suffice for most applications.
I will provide you with some general advice, specific knowledge and some example code that I have used before.
Firstly, you should not try to solve this problem by writing a program and testing with your IMU. You should start by writing a simulation using validated libraries, then validate your algorithm/program, and only then try to implement it with the IMU.
Secondly, you say you want to "count roll, pitch and yaw from full range (0-360 degrees)".
From this I assume you mean you want to be able to determine the Euler Angles that represent the attitude of the device with respect to an external stationary North-East-Down (NED) frame.
Your statement makes me think you are not familiar with representations of attitude, because as far as I know there are no Euler Angle representations with all 3 angles in the 0-360 range.
The application for which you want to use the attitude of the device will be important. If you are using Euler Angles you will not be able to accurately track the attitude of the device when large (greater than around 50 degrees) rotations are made on the roll or pitch axes, due to what is known as Gimbal Lock.
If you require the tracking of such motions then you will need to use a quaternion or Direction Cosine Matrix (DCM) representation of attitude.
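For example, integrating the gyro rates directly into a unit quaternion avoids gimbal lock entirely. A minimal sketch (my own illustration, not taken from any particular library):

#include <math.h>

typedef struct { float w, x, y, z; } Quat;

// Integrate body-frame gyro rates gx, gy, gz (rad/s) over one sample period dt
// into the attitude quaternion q, using q_dot = 0.5 * q * (0, gx, gy, gz).
void quat_integrate(Quat *q, float gx, float gy, float gz, float dt)
{
    float hw = 0.5f * (-q->x * gx - q->y * gy - q->z * gz);
    float hx = 0.5f * ( q->w * gx + q->y * gz - q->z * gy);
    float hy = 0.5f * ( q->w * gy - q->x * gz + q->z * gx);
    float hz = 0.5f * ( q->w * gz + q->x * gy - q->y * gx);

    q->w += hw * dt;
    q->x += hx * dt;
    q->y += hy * dt;
    q->z += hz * dt;

    // Renormalise so the quaternion stays a unit quaternion.
    float n = sqrtf(q->w * q->w + q->x * q->x + q->y * q->y + q->z * q->z);
    q->w /= n;  q->x /= n;  q->y /= n;  q->z /= n;
}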
Thirdly, as you have said, you can use a Complementary Filter or a Kalman Filter variant (Extended Kalman Filter, Error-State Kalman Filter, Indirect Kalman Filter) to accurately track the attitude of the device by fusing the data from the accelerometer, gyro and a magnetometer. I suggest the Complementary Filter described by Madgwick, which is implemented in C, C# and MATLAB here. A Kalman Filter variant would be necessary if you wanted to track the position of the device and had an additional sensor such as GPS.
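To give a feel for the idea, here is a basic single-axis complementary filter of my own (this is not Madgwick's algorithm, which works on a quaternion and handles all three axes together):

// gyro_rate:   angular rate about the pitch axis in rad/s (from the gyro)
// accel_pitch: pitch angle in rad estimated from the accelerometer
// dt:          sample period in seconds
// alpha:       blend factor close to 1.0 (e.g. 0.98)
float complementary_pitch(float prev_pitch, float gyro_rate,
                          float accel_pitch, float dt, float alpha)
{
    // Integrate the gyro for short-term accuracy, then pull the estimate
    // slowly towards the accelerometer angle to remove long-term drift.
    return alpha * (prev_pitch + gyro_rate * dt) + (1.0f - alpha) * accel_pitch;
}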
For some example code of mine using only the accelerometer to get Euler angle pitch and roll, see my answer to this other question.
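The essence of the accelerometer-only approach (this is only a sketch of the usual atan2-based formulas, not the code from that answer) is to use atan2 so that the quadrant information your linked documents lose to square roots is preserved:

#include <math.h>

// Pitch and roll from a single accelerometer sample (ax, ay, az in any
// consistent unit, e.g. g). Roll covers the full -pi..pi range; pitch is
// limited to +/-90 degrees by the usual aerospace convention.
void accel_to_pitch_roll(float ax, float ay, float az,
                         float *pitch, float *roll)
{
    *roll  = atan2f(ay, az);
    *pitch = atan2f(-ax, sqrtf(ay * ay + az * az));
}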
The sensor module in my project consists of a rotating camera that collects noisy information about moving objects in the surrounding environment.
The information consists of the distance, angle and relative change of the moving objects.
The camera's limited field of view makes it essential to rotate it periodically to update the environment information.
I was looking for algorithms / ways to model this information, in order to be able to guess / predict / learn the motion properties of these objects.
My current idea is to store the last n snapshots of each object in a queue and take a weighted average of the positions and velocities of each moving object, but I think it is a poor method.
Can you suggest some topics or references that suit this case?
Thanks
Kalman filters {extended, unscented, ...}, and particle filters only after reading about Kalman filters.
Kalman filters learn and predict the correct data from noisy data under a Gaussian assumption, so they may be of use to you. If you need non-Gaussian methods, look at the particle filter.
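To make that concrete, here is a minimal scalar (1-D) Kalman filter sketch that smooths one noisy measurement, e.g. the range to an object. A real tracker would use a state vector holding position and velocity so it can also predict motion between camera sweeps:

// x: state estimate, p: estimate variance,
// q: process noise variance, r: measurement noise variance (assumed known).
typedef struct { float x, p, q, r; } Kalman1D;

float kalman1d_update(Kalman1D *kf, float z)
{
    // Predict: the value is assumed to change only slowly between measurements.
    kf->p += kf->q;

    // Update: blend prediction and measurement by their relative uncertainty.
    float k = kf->p / (kf->p + kf->r);   // Kalman gain
    kf->x += k * (z - kf->x);
    kf->p *= (1.0f - k);
    return kf->x;
}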