MATLAB Broadcasting for unifrnd - arrays

I am coming back to MATLAB from NumPy and don't quite have the hang of broadcasting here.
Can someone explain to me why the first call fails while the second, more explicit one works?
To my understanding, x0 and x1 are both 1x2 arrays, so it should be possible to expand them to 5x2.
n_a = 5;
n_b = 2;
x0 = [1, 2];
x1 = [11, 22];
% c = unifrnd(x0, x1, [n_a, n_b])
% Error using unifrnd
% Size information is inconsistent.
% c = unifrnd(x0, x1, [n_a, 1]) % also fails
c = unifrnd(ones(n_a, n_b) .* x0, ones(n_a, n_b) .* x1, [n_a, n_b])
% works

There is a size check inside the unifrnd function (you can type open unifrnd in the command line to see the function code). It throws an error if the third input is not consistent with the sizes of the first two inputs:
[err, sizeOut] = internal.stats.statsizechk(2,a,b,varargin{:});
if err > 0
error(message('stats:unifrnd:InputSizeMismatch'));
end
If you skip this check, though (i.e. if you write a custom function without it), both of your failing calls will actually work, thanks to implicit expansion. The real question is whether calling the function this way makes sense.
TL;DR: it is not broadcasting that fails, it is the function that does not accept these combinations of inputs.

unifrnd essentially calls rand and applies scaling and shifting to the desired interval. So you can use rand and do the scaling and shifting manually, which allows you to employ broadcasting (singleton expansion):
c = x0 + (x1-x0).*rand(n_a, n_b);


Vectorise circshift array

I have a for loop that shifts a signal over a certain amount and appends it to an array. How can I vectorize the circshift section so I don't need to use the for loop?
fs_rate=10
len_of_sig=1; %length of signal in seconds
t=linspace(0,len_of_sig,fs_rate*len_of_sig);
y=.5*sin(2*pi*1*t);
for aa=1:length(y)
y_new(aa,:)=circshift(y,[1,aa+3]); %shifts signal and appends to array
end
plot(t,y_new)
PS: I'm using Octave 4.2.2 Ubuntu 18.04 64bit
You can use gallery to create a circulant matrix after applying circshift for your base shift:
base_shift = 4;
fs_rate = 10;
len_of_sig = 1; # length of signal in seconds
t = linspace (0, len_of_sig, fs_rate*len_of_sig);
y = .5 * sin (2*pi*1*t);
y = gallery ("circul", circshift (y, [1 base_shift]));
Or, if you want to know how it is implemented, take a look at its source code with: type gallery

Generate a sine signal with time dependent frequency in C

I want to generate a sine signal with a time dependent frequency that varies periodically between fmin and fmax with a frequency f0 in C. Mathematically, this can be described by
y(t)=1/2*(1+sin(2*Pi*(fmin*t + (fmax-fmin)*1/2*(t - 1/2/Pi/f0*cos(2*Pi*f0*t) + 1/2/Pi/f0))))
Because I want to use this on a microcontroller, I want to reduce the use of floating-point numbers in order to save memory and increase performance.
This is also a reason why I cannot use the standard sine function. To calculate the sine function, I approximate it with a parabola. Therefore, I use the following code:
Input range [0, 2π] radians mapped to [0, 1024].
Output range [-1.0, 1.0] mapped to [-512, 512].
int_fast16_t sin_bam(int angle_bam) {
    /* wrap the angle into [0, 1024), i.e. one full period */
    angle_bam %= 1024;
    if (angle_bam < 0)
        angle_bam += 1024;
    /* parabola over half a period, peak value 512 at sub_angle = 256 */
    uint_fast16_t sub_angle = angle_bam % (1024u / 2);
    int_fast16_t sn = (uint32_t) (sub_angle*(1024/2 - sub_angle) / 128);
    /* second half-period is the mirror image */
    if (angle_bam & 512) {
        sn = -sn;
    }
    return sn;
}
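For reference, a quick standalone check of this mapping (it assumes the sin_bam() above is available): the angle range 0..1023 corresponds to 0..2π, and the output divided by 512 should roughly track sinf().
#include <stdio.h>
#include <math.h>
/* Compares the parabola approximation against sinf() at a few angles. */
int test_sin_bam(void)
{
    const float two_pi = 6.2831853F;
    int angle;
    for (angle = 0; angle < 1024; angle += 64)
    {
        float approx = sin_bam(angle) / 512.0F;
        float exact  = sinf(angle * (two_pi / 1024.0F));
        printf("angle=%4d  parabola=%+.4f  sinf=%+.4f\n", angle, approx, exact);
    }
    return 0;
}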
My first approach to implement the signal above was
y(i) = sin_bam(fmin*i+(fmax-fmin)*(i-2/f0*sin_bam((int) (f0*i+1024/4))+1024/f0))+512.
For values of approximately f0 >= 1, this works well, but for smaller values my code seems to break, e.g. for f0 = 0.1 the generated signal becomes very "unsmooth" each time it reaches a frequency minimum.
Here is a sample output:
I assume that the problem could be the integer implementation of the sine function, since for f0 = 0.1 the inner call reads
sin_bam((int) (0.1*i+1024/4)).
That means, for example, that for values of i between 0 and 9, sin_bam((int) (0.1*i+1024/4)) delivers the same output.
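A quick standalone way to see this truncation in isolation (using the constants from the call above):
#include <stdio.h>
/* Shows that (int)(0.1*i + 1024/4) stays at 256 for i = 0..9 when f0 = 0.1,
   so the inner sine argument only advances once every 10 samples. */
int show_truncation(void)
{
    int i;
    for (i = 0; i < 20; ++i)
        printf("i=%2d  (int)(0.1*i + 1024/4) = %d\n", i, (int)(0.1 * i + 1024 / 4));
    return 0;
}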
My first idea to solve this issue was to increase the angular resolution of the sine function, but unfortunately this didn't help.
Is there any logical error in my algorithm or does anybody have a better idea to implement this?

Configuring and limiting output of PI controller

I have implemented simple PI controller, code is as follows:
PI_controller() {
    // handle the input value and the errors
    previous_error = current_error;
    current_error = 0 - input_value;
    // PI regulation
    P = current_error;    // P is the proportional value
    I += previous_error;  // I is the integral value
    output = Kp*P + Ki*I; // Kp and Ki are the coefficients
}
Input value is always between -π and +π.
Output value must be between -4000 and +4000.
My question is - how to configure and (most importantly) limit the PI controller properly.
Too much for a comment but not a definitive answer. What is "a simple PI controller"? And "how long is a piece of string"? I don't see why you (effectively) code
P = (current_error = 0 - input_value);
which simply negates an input in the range -π to +π. You then aggregate the error with
I += previous_error;
but haven't stated the cumulative error bounds, and then calculate
output = Kp*P + Ki*I;
which must be -4000 <= output <= 4000. So you are looking for values of Kp and Ki that keep you within bounds, or that perhaps stay within bounds only under average conditions.
I suggest an empirical solution. Try a series of runs, recording the results, stepping the values of Kp and Ki in 5 steps each, first from extreme negative to extreme positive values. Limit the output as you stated, counting the number of results that break the limit.
Next, halve the range of one of Kp and Ki and make a further informed choice as to which one to narrow. And so on. "Divide and conquer". A rough sketch of such a sweep follows below.
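In this sketch, simulate_run() is a hypothetical stand-in for running your actual loop (or replaying logged input data) and counting how often the raw output leaves the +/-4000 range:
#include <stdio.h>

/* Hypothetical stand-in: drives a toy first-order plant with the PI law from the
   question and returns how many samples the raw output left +/-4000.
   Replace it with a run against your real system or recorded data. */
static int simulate_run(double Kp, double Ki)
{
    double input_value = 3.14159265;  /* start at the worst-case error of pi */
    double I = 0.0;
    int violations = 0;
    int n;
    for (n = 0; n < 1000; ++n)
    {
        double error  = 0.0 - input_value;
        I += error;
        double output = Kp * error + Ki * I;
        if (output > 4000.0 || output < -4000.0)
            violations++;
        input_value += 0.0005 * output;  /* toy plant response */
    }
    return violations;
}

int main(void)
{
    /* coarse 5 x 5 sweep from extreme negative to positive values */
    const double kp_vals[] = { -1000.0, -100.0, 0.0, 100.0, 1000.0 };
    const double ki_vals[] = {  -100.0,  -10.0, 0.0,  10.0,  100.0 };
    int i, j;
    for (i = 0; i < 5; ++i)
        for (j = 0; j < 5; ++j)
            printf("Kp=%8.1f Ki=%8.1f -> %d samples out of bounds\n",
                   kp_vals[i], ki_vals[j], simulate_run(kp_vals[i], ki_vals[j]));
    return 0;
}
From the violation counts you can then narrow the ranges as described above.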
As to your requirement "how to limit the PI controller properly", are you sure that 4000 is the limit and not 4096 or even 4095?
if (output < -4000) output = -4000;
if (output > 4000) output = 4000;
To configure your Kp and Ki you really should analyze the frequency response of your system and design your PI controller to give the desired response. To simply limit the output, decide whether you need to freeze the integrator or just limit the immediate output. I'd recommend freezing the integrator.
I_tmp = previous_error + I;
output_tmp = Kp*P + Ki*I_tmp;
if( output_tmp < -4000 )
{
    output = -4000;
}
else if( output_tmp > 4000 )
{
    output = 4000;
}
else
{
    I = I_tmp;
    output = output_tmp;
}
That's not a super elegant, vetted algorithm, but it gives you an idea.
If I understand your question correctly, you are asking about anti-windup for your integrator.
There are more clever ways to do it, but a simple
if (abs(I) < x)
{
    I += previous_error;
}
will prevent windup of the integrator.
Then you need to figure out x, Kp and Ki so that abs(x*Ki) + abs(3.14*Kp) < 4000.
[edit] Of course, as macduff states, you first need to analyse your system and choose the correct Ki and Kp; x is the only really free variable in the above equation.
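Putting the clamping and the anti-windup together, a minimal sketch of one update step could look like this (I_MAX plays the role of x above; the value shown is only a placeholder that you still need to derive from Kp, Ki and the 4000 limit):
#include <math.h>

#define OUTPUT_LIMIT 4000.0
#define I_MAX        2000.0   /* placeholder anti-windup bound, see the inequality above */

static double I = 0.0;        /* integrator state */

double pi_step(double input_value, double Kp, double Ki)
{
    double error = 0.0 - input_value;      /* setpoint is 0, as in the question */
    double output;

    /* conditional integration: stop accumulating once the integrator hits its bound */
    if (fabs(I) < I_MAX)
        I += error;

    output = Kp * error + Ki * I;

    /* hard clamp on the immediate output */
    if (output >  OUTPUT_LIMIT) output =  OUTPUT_LIMIT;
    if (output < -OUTPUT_LIMIT) output = -OUTPUT_LIMIT;

    return output;
}
Note that the question accumulates previous_error rather than the current error; that only delays the integral term by one sample and does not change the idea.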

CORDIC Arcsine implementation fails

I have recently implemented a library of CORDIC functions to reduce the required computational power (my project is based on a PowerPC and is extremely strict in its execution time specifications). The language is ANSI-C.
The other functions (sin/cos/atan) work within accuracy limits both in 32 and in 64 bit implementations.
Unfortunately, the asin() function fails systematically for certain inputs.
For testing purposes I have implemented an .h file to be used in a Simulink S-Function. (This is only for my convenience; you can compile the following as a standalone .exe with minimal changes.)
Note: I have forced 32 iterations because I am working in 32 bit precision and the maximum possible accuracy is required.
Cordic.h:
#include <stdio.h>
#include <stdlib.h>
#define FLOAT32 float
#define INT32 signed long int
#define BIT_XOR ^
#define CORDIC_1K_32 0x26DD3B6A
#define MUL_32 1073741824.0F /*needed to scale float -> int*/
#define INV_MUL_32 9.313225746E-10F /*needed to scale int -> float*/
INT32 CORDIC_CTAB_32 [] = {0x3243f6a8, 0x1dac6705, 0x0fadbafc, 0x07f56ea6, 0x03feab76, 0x01ffd55b, 0x00fffaaa, 0x007fff55,
0x003fffea, 0x001ffffd, 0x000fffff, 0x0007ffff, 0x0003ffff, 0x0001ffff, 0x0000ffff, 0x00007fff,
0x00003fff, 0x00001fff, 0x00000fff, 0x000007ff, 0x000003ff, 0x000001ff, 0x000000ff, 0x0000007f,
0x0000003f, 0x0000001f, 0x0000000f, 0x00000008, 0x00000004, 0x00000002, 0x00000001, 0x00000000};
/* CORDIC Arcsine Core: vectoring mode */
INT32 CORDIC_asin(INT32 arc_in)
{
    INT32 k;
    INT32 d;
    INT32 tx;
    INT32 ty;
    INT32 x;
    INT32 y;
    INT32 z;
    x = CORDIC_1K_32; /* start from the CORDIC gain constant 1/An in fixed point */
    y = 0;
    z = 0;
    for (k = 0; k < 32; ++k)
    {
        d = (arc_in - y) >> (31);                  /* d = 0 while y <= arc_in, -1 otherwise */
        tx = x - (((y >> k) BIT_XOR d) - d);       /* (v XOR d) - d conditionally negates v */
        ty = y + (((x >> k) BIT_XOR d) - d);
        z += ((CORDIC_CTAB_32[k] BIT_XOR d) - d);  /* accumulate the rotation angles */
        x = tx;
        y = ty;
    }
    return z;
}
/* Wrapper function for scaling in-out of cordic core*/
FLOAT32 asin_wrap(FLOAT32 arc)
{
    return ((FLOAT32)(CORDIC_asin((INT32)(arc*MUL_32))*INV_MUL_32));
}
This can be called in a manner similar to:
#include "Cordic.h"
#include "math.h"
int main(void)
{
    FLOAT32 value_32 = 0.5F; /* any input in [-1, 1] */
    FLOAT32 y1;
    FLOAT32 y2;
    y1 = asin_wrap(value_32); /* my implementation */
    y2 = asinf(value_32);     /* standard math.h for comparison */
    return 0;
}
The results are as shown:
Top left shows the [-1;1] input over 2000 steps (0.001 increments), bottom left the output of my function, bottom right the standard output and top right the difference of the two outputs.
It is immediately apparent that the error is not within 32-bit accuracy.
I have analysed the steps performed by my code (and the intermediate results), and it seems to me that at a certain point the value of y gets "close enough" to the initial value of arc_in, and something that could be related to a bit-shift causes the solution to diverge.
My questions:
I am at a loss: is this error inherent in the CORDIC method, or have I made a mistake in the implementation? I was expecting the decrease of accuracy near the extremes, but those spikes in the middle are quite unexpected. (The most notable ones are just beyond +/- 0.6, but even ignoring these there are more at smaller values, albeit not as pronounced.)
If it is something part of the CORDIC implementation, are there known workarounds?
EDIT:
Since some comment mention it, yes, I tested the definition of INT32, even writing
#define INT32 int32_T
does not change the results by the slightest amount.
The computation time on the target hardware has been measured by hundreds of repetitions of blocks of 10,000 iterations of the function, with random input in the validity range. The observed mean results (for one call of the function) are as follows:
math.h asinf() 100.00 microseconds
CORDIC asin() 5.15 microseconds
(apparently the previous test had been faulty; a new cross-test obtained no better than an average of 100 microseconds across the validity range)
I apparently found a better implementation. It can be downloaded in a MATLAB version here and in C here. I will analyse its inner workings further and report later.
To review a few things mentioned in the comments:
The given code outputs values identical to another CORDIC implementation. This includes the stated inaccuracies.
The largest error is as you approach arcsin(1).
The second largest error is that the values of arcsin(0.60726) to arcsin(0.68514) all return 0.754805.
There are some vague references to inaccuracies in the CORDIC method for some functions including arcsin. The given solution is to perform "double-iterations" although I have been unable to get this to work (all values give a large amount of error).
The alternate CORDIC implementation has a comment /* |a| < 0.98 */ in the arcsin() implementation, which would seem to reinforce that there are known inaccuracies close to 1.
As a rough comparison of a few different methods, consider the following results (all tests performed on a desktop Windows 7 computer using MSVC++ 2010; benchmarks timed using 10M iterations over the arcsin() range 0 to 1):
Question CORDIC Code: 1050 ms, 0.008 avg error, 0.173 max error
Alternate CORDIC Code (ref): 2600 ms, 0.008 avg error, 0.173 max error
atan() CORDIC Code: 2900 ms, 0.21 avg error, 0.28 max error
CORDIC Using Double-Iterations: 4700 ms, 0.26 avg error, 0.917 max error (???)
Math Built-in asin(): 200 ms, 0 avg error, 0 max error
Rational Approximation (ref): 250 ms, 0.21 avg error, 0.26 max error
Linear Table Lookup (see below) 100 ms, 0.000001 avg error, 0.00003 max error
Taylor Series (7th power, ref): 300 ms, 0.01 avg error, 0.16 max error
These results are on a desktop so how relevant they would be for an embedded system is a good question. If in doubt, profiling/benchmarking on the relevant system would be advised. Most solutions tested don't have very good accuracy over the range (0-1) and all but one are actually slower than the built-in asin() function.
The linear table lookup code is posted below and is my usual method for any expensive mathematical function when speed is desired over accuracy. It simply uses a 1024 element table with linear interpolation. It seems to be both the fastest and most accurate of all methods tested, although the built-in asin() is not much slower really (test it!). It can easily be adjusted for more or less accuracy by changing the size of the table.
// Please test this code before using in anything important!
#include <math.h>    // asin()
#include <stddef.h>  // size_t

const size_t ASIN_TABLE_SIZE = 1024;
double asin_table[ASIN_TABLE_SIZE];

int init_asin_table(void)
{
    for (size_t i = 0; i < ASIN_TABLE_SIZE; ++i)
    {
        float f = (float) i / ASIN_TABLE_SIZE;
        asin_table[i] = asin(f);
    }
    return 0;
}

double asin_lookup(double a)
{
    // Call automatically the first time (needs C++) or call init_asin_table() manually
    static int s_Init = init_asin_table();
    double sign = 1.0;
    if (a < 0)
    {
        a = -a;
        sign = -1.0;
    }
    if (a > 1) return 0;
    double fi = a * ASIN_TABLE_SIZE;
    double decimal = fi - (int)fi;
    size_t i = fi;
    if (i >= ASIN_TABLE_SIZE-1) return sign * 3.14159265359/2;
    return sign * ((1.0 - decimal)*asin_table[i] + decimal*asin_table[i+1]);
}
The "single rotate" arcsine goes badly wrong when the argument is just greater than the initial value of 'x', where that is the magical scaling factor -- 1/An ~= 0.607252935 ~= 0x26DD3B6A.
This is because, for all arguments > 0, the first step always has y = 0 < arg, so d = +1, which sets y = 1/An, and leaves x = 1/An. Looking at the second step:
if arg <= 1/An, then d = -1, and the steps which follow converge to a good answer
if arg > 1/An, then d = +1, and this step moves further away from the right answer, and for a range of values a little bigger than 1/An, the subsequent steps all have d = -1, but are unable to correct the result :-(
I found:
arg = 0.607 (ie 0x26D91687), relative error 7.139E-09 -- OK
arg = 0.608 (ie 0x26E978D5), relative error 1.550E-01 -- APPALLING !!
arg = 0.685 (ie 0x2BD70A3D), relative error 2.667E-04 -- BAD !!
arg = 0.686 (ie 0x2BE76C8B), relative error 1.232E-09 -- OK, again
The descriptions of the method warn about abs(arg) >= 0.98 (or so), and I found that somewhere after 0.986 the process fails to converge and the relative error jumps to ~5E-02 and hits 1E-01 (!!) at arg=1 :-(
As you did, I also found that for 0.303 < arg < 0.313 the relative error jumps to ~3E-02, and reduces slowly until things return to normal. (In this case step 2 overshoots so far that the remaining steps cannot correct it.)
So... the single rotate CORDIC for arcsine looks rubbish to me :-(
Added later... when I looked even closer at the single rotate CORDIC, I found many more small regions where the relative error is BAD...
...so I would not touch this as a method at all... it's not just rubbish, it's useless.
BTW: I thoroughly recommend "Software Manual for the Elementary Functions", William Cody and William Waite, Prentice-Hall, 1980. The methods for calculating the functions are not so interesting any more (but there is a thorough, practical discussion of the relevant range-reductions required). However, for each function they give a good test procedure.
The additional source I linked at the end of the question apparently contains the solution.
The proposed code can be reduced to the following:
#define M_PI_2_32 1.57079632F
#define SQRT2_2 7.071067811865476e-001F /* sin(45°) = cos(45°) = sqrt(2)/2 */
FLOAT32 angles[] = {
7.8539816339744830962E-01F, 4.6364760900080611621E-01F, 2.4497866312686415417E-01F, 1.2435499454676143503E-01F,
6.2418809995957348474E-02F, 3.1239833430268276254E-02F, 1.5623728620476830803E-02F, 7.8123410601011112965E-03F,
3.9062301319669718276E-03F, 1.9531225164788186851E-03F, 9.7656218955931943040E-04F, 4.8828121119489827547E-04F,
2.4414062014936176402E-04F, 1.2207031189367020424E-04F, 6.1035156174208775022E-05F, 3.0517578115526096862E-05F,
1.5258789061315762107E-05F, 7.6293945311019702634E-06F, 3.8146972656064962829E-06F, 1.9073486328101870354E-06F,
9.5367431640596087942E-07F, 4.7683715820308885993E-07F, 2.3841857910155798249E-07F, 1.1920928955078068531E-07F,
5.9604644775390554414E-08F, 2.9802322387695303677E-08F, 1.4901161193847655147E-08F, 7.4505805969238279871E-09F,
3.7252902984619140453E-09F, 1.8626451492309570291E-09F, 9.3132257461547851536E-10F, 4.6566128730773925778E-10F};
FLOAT32 arcsin_cordic(FLOAT32 t)
{
    INT32 i;
    INT32 j;
    INT32 flip;
    FLOAT32 poweroftwo;
    FLOAT32 sigma;
    FLOAT32 sign_or;
    FLOAT32 theta;
    FLOAT32 x1;
    FLOAT32 x2;
    FLOAT32 y1;
    FLOAT32 y2;
    flip = 0;
    theta = 0.0F;
    x1 = 1.0F;
    y1 = 0.0F;
    poweroftwo = 1.0F;
    /* If the angle is small, use the small angle approximation */
    if ((t >= -0.002F) && (t <= 0.002F))
    {
        return t;
    }
    if (t >= 0.0F)
    {
        sign_or = 1.0F;
    }
    else
    {
        sign_or = -1.0F;
    }
    /* The inv_sqrt() is the famous Fast Inverse Square Root from the Quake 3 engine,
       here used with 3 (!!) Newton iterations */
    if ((t >= SQRT2_2) || (t <= -SQRT2_2))
    {
        t = 1.0F/inv_sqrt(1-t*t);
        flip = 1;
    }
    for (j = 0; j < 32; j++)
    {
        if (y1 > t)
        {
            sigma = -1.0F;
        }
        else
        {
            sigma = 1.0F;
        }
        /* Here a double iteration is done */
        x2 = x1 - (sigma * poweroftwo * y1);
        y2 = (sigma * poweroftwo * x1) + y1;
        x1 = x2 - (sigma * poweroftwo * y2);
        y1 = (sigma * poweroftwo * x2) + y2;
        theta += 2.0F * sigma * angles[j];
        t *= (1.0F + poweroftwo * poweroftwo);
        poweroftwo *= 0.5F;
    }
    /* Remove bias */
    theta -= sign_or*4.85E-8F;
    if (flip)
    {
        theta = sign_or*(M_PI_2_32-theta);
    }
    return theta;
}
The following is to be noted:
It is a "Double-Iteration" CORDIC implementation.
The angles table thus differs in construction from the old fixed-point table: its entries are atan(2^-j) stored as single-precision floats (see the sketch after this list).
The computation is done in floating point, which will cause a major increase in computation time on the target hardware.
A small bias is present in the output; it is removed via the theta -= sign_or*4.85E-8F; statement.
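For reference, a small sketch of how such an arctangent table can be generated offline (the target itself never needs to run this):
#include <math.h>
#include <stdio.h>

/* Prints the 32 arctangent entries used by the double-iteration CORDIC above:
   angles[j] = atan(2^-j) for j = 0..31. */
int print_cordic_angles(void)
{
    int j;
    for (j = 0; j < 32; ++j)
        printf("%.19E,\n", atan(ldexp(1.0, -j)));
    return 0;
}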
The following picture shows the absolute (left) and relative errors (right) of the old implementation (top) vs the implementation contained in this answer (bottom).
The relative error is obtained simply by dividing the CORDIC output by the output of the built-in math.h implementation; for this reason it is plotted around 1 and not around 0.
The peak relative error (when not dividing by zero) is 1.0728836e-006.
The average relative error is 2.0253509e-007 (almost in accordance with 32-bit accuracy).
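A sketch of how such error figures can be computed numerically, assuming the arcsin_cordic() above is compiled in; a plain 1.0F/sqrtf() is used here as a stand-in for the fast inverse square root mentioned in the code:
#include <math.h>
#include <stdio.h>

/* Plain stand-in for the Quake-style fast inverse square root used above. */
FLOAT32 inv_sqrt(FLOAT32 v) { return 1.0F / sqrtf(v); }

int measure_relative_error(void)
{
    FLOAT32 max_dev = 0.0F;
    FLOAT32 sum_dev = 0.0F;
    int n = 0;
    FLOAT32 t;
    for (t = -0.999F; t <= 0.999F; t += 0.001F)
    {
        FLOAT32 ref = asinf(t);
        FLOAT32 dev;
        if (ref == 0.0F)
            continue;                       /* avoid dividing by zero near t = 0 */
        dev = fabsf(arcsin_cordic(t) / ref - 1.0F);
        if (dev > max_dev) max_dev = dev;
        sum_dev += dev;
        ++n;
    }
    printf("peak relative error: %e, average: %e\n", max_dev, sum_dev / (FLOAT32)n);
    return 0;
}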
For convergence of the iterative process it is necessary that any "wrong" i-th iteration can be "corrected" in the subsequent (i+1)-th, (i+2)-th, (i+3)-th, etc. iterations. In other words, at least half of the "wrong" i-th iteration must be correctable in the next (i+1)-th iteration.
For atan(1/2^i) this condition is satisfied, i.e.:
atan(1/2^(i+1)) > 1/2*atan(1/2^i)
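A quick numerical check of that condition over the index range used by the tables above:
#include <math.h>
#include <stdio.h>

/* Verifies atan(2^-(i+1)) > 0.5 * atan(2^-i) for i = 0..30. */
int check_atan_condition(void)
{
    int i;
    for (i = 0; i < 31; ++i)
    {
        double lhs = atan(ldexp(1.0, -(i + 1)));
        double rhs = 0.5 * atan(ldexp(1.0, -i));
        printf("i=%2d  atan(2^-(i+1))=%.12f  0.5*atan(2^-i)=%.12f  %s\n",
               i, lhs, rhs, (lhs > rhs) ? "holds" : "fails");
    }
    return 0;
}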
Read more at
http://cordic-bibliography.blogspot.com/p/double-iterations-in-cordic.html
and:
http://baykov.de/CORDIC1972.htm
(note I'm the author of those pages)

2D Deconvolution using FFT in Matlab Problems

I have convolved an image I created in MATLAB with a 2D Gaussian function, which I have also defined in MATLAB, and now I am trying to deconvolve the resultant matrix to see if I get the 2D Gaussian function back using the fft2 and ifft2 commands. However, the matrix I get as a result is incorrect (to my knowledge). Here is the code for what I have done thus far:
% Code for input image (img) [300x300 array]
N = 100;
t = linspace(0,2*pi,50);
r = (N-10)/2;
circle = poly2mask(r*cos(t)+N/2+0.5, r*sin(t)+N/2+0.5,N,N);
img = repmat(circle,3,3);
% Code for 2D Gaussian Function with c = 0 sig = 1/64 (Z) [300x300 array]
x = linspace(-3,3,300);
y = x';
[X Y] = meshgrid(x,y);
Z = exp(-((X.^2)+(Y.^2))/(2*1/64));
% Code for 2D Convolution of img with Z (C) [599x599 array]
C = conv2(img,Z);
% I have tested that this convolution is correct using cross-section profile vectors for img and C, and the resulting x-y plots are what I expect from the convolution.
% From my knowledge of convolution, the algorithm works as a multiplier in Fourier space, therefore by dividing the Fourier transform of my output (the convolved image) by that of my input (img) I should get back the point spread function (Z, the 2D Gaussian function) after applying the inverse Fourier transform to the result of that division.
% Code for attempted 2D deconvolution
Fimg = fft2(img,599,599);
% zero padding added to increase result to 599x599 array
FC = fft2(C);
R = FC/Fimg;
% I now get this error prompt: Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 2.551432e-22
iFR = ifft2(R);
I'm expecting iFR to be close to Z but I'm getting something completely different. It may be an approximation of Z with complex values, but I can't seem to check it since I don't know how to plot a 3D complex matrix in MATLAB. So can anyone tell me whether my answer is correct or incorrect, and how to get this deconvolution to work? I'd much appreciate it.
R = FC/Fimg needs to be R = FC./Fimg; you need to do the division element-wise.
Here are some Octave (version 3.6.2) plots of that deconvolved Gaussian.
% deconvolve in frequency domain
Fimg = fft2(img,599,599);
FC = fft2(C);
R = FC ./ Fimg;
r = ifft2(R);
% show deconvolved Gaussian
figure(1);
subplot(2,3,1), imshow(img), title('image');
subplot(2,3,2), imshow(Z), title('Gaussian');
subplot(2,3,3), imshow(C), title('image blurred by Gaussian');
subplot(2,3,4), mesh(X,Y,Z), title('initial Gaussian');
subplot(2,3,5), imagesc(real(r(1:300,1:300))), colormap gray, title('deconvolved Gaussian');
subplot(2,3,6), mesh(X,Y,real(r(1:300,1:300))), title('deconvolved Gaussian');
% show difference between Gaussian and deconvolved Gaussian
figure(2);
gdiff = Z - real(r(1:300,1:300));
imagesc(gdiff), colorbar, colormap gray, title('difference between initial Gaussian and deconvolved Gaussian');
