How to extract specific parts of an equation in MuPAD or Maple - symbolic-math

I have MuPAD and Maple and I would like to do the following with one of these programs:
I have an equation containing several cosines with different amplitudes and different arguments, as depicted in the picture below in the first (blue) row.
I want to extract only those cosines whose argument contains at least "+at-bt" (so "+at-bt+alpha" is OK, too) - see the second (blue) row.
I want to display the sum of the amplitudes of these specific cosines - see the third (red) row.
The second picture shows a real example.

Let's say that your long expression is named expr. Then do this
TypeTools:-AddType(
MyCos,
cos(satisfies(x-> x::`+` and {a*t, -b*t} subset {op(x)} or x = b*t-a*t))
):
subex:= select(T-> T::MyCos or T::`*` and membertype(MyCos, {op(T)}), expr);
Now subex is your desired subexpression. If you want to add up the coefficients, then simply do eval(subex, cos= 1).
Note that this will not find partially factored arguments like (a-b)*t+alpha. If you need to find these, let me know.
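If it helps to prototype the same selection outside Maple, here is a minimal Python/SymPy sketch of the idea (the symbols a, b, t, alpha and the example expression are made up, and this is not the Maple code above, just the same filter): keep the summands whose cosine argument contains both +a*t and -b*t, then substitute 1 for the matched cosines to sum the amplitudes.
from sympy import symbols, cos, Add

a, b, t, alpha = symbols('a b t alpha')
# Made-up example: only the first two terms should be kept.
expr = 3*cos(a*t - b*t) + 5*cos(a*t - b*t + alpha) + 2*cos(a*t + b*t)

def has_target_arg(c):
    # True if the cosine's argument contains +a*t and -b*t as summands.
    arg = c.args[0]
    terms = set(arg.args) if isinstance(arg, Add) else {arg}
    return {a*t, -b*t} <= terms or arg == a*t - b*t

# Keep only the summands whose cosine matches.
subex = Add(*[term for term in Add.make_args(expr)
              if any(has_target_arg(c) for c in term.atoms(cos))])

print(subex)                                          # 3*cos(a*t - b*t) + 5*cos(alpha + a*t - b*t)
print(subex.subs({c: 1 for c in subex.atoms(cos)}))   # 8, the sum of the amplitudes
Like the Maple version, this will not find partially factored arguments such as (a-b)*t + alpha.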

Related

How do I search for most similar sequence in Excel?

I'm hoping to search an Excel column for the sequence in it that is most similar to a sequence I enter.
For instance, in the following example, the sequence I provide is: 1, 2.5, 3.5, 2.5, 1. It's depicted on the following graph as black.
In the column I'm searching, there are a few sequences. The most similar one to mine is colored blue. It goes: 1, 2, 3, 2, 1.
Graph
Do any of you know an Excel formula, or series of formulas and steps, that would allow Excel to determine this -- so that when I enter the black sequence, for instance, it will match it with the blue sequence as the most similar one?
Thanks to this Stack Overflow answer, I already know how to search a set of numbers for an exact sequence by using the following formula:
=MATCH([Criteria 1]&[Criteria 2],[Data 1st val]:[Data last val]&[Data 2nd val]:[Data last + 1 val],0)
For instance, if I have the following numbers: 1, 3, 5, 1, 4, and I am hoping to find the sequence, 1, 4, this formula will direct me towards it in that set of numbers.
I ALSO already know how to find the closest match to a number I enter, using this formula (which will make more sense if you look in the example image below): =INDEX($A$1:$A$10,MATCH(MIN(ABS(C1-B1:B10)),ABS(C1-$B$1:$B$10),0))
Example
When I press Ctrl+Shift+Enter, this formula will produce the number 4, indicating row 4, because the number I entered in C1, which was 39, is closest to the number 40, which is located in the 4th row.
So I have both the components -- finding exact sequences, and finding the closest number -- but now the question is, how do I combine these two formulas to show me the closest sequence of numbers, the one which would look most similar if drawn on a graph like in my first example with the blue and black line?
And bonus points if you can help find not only the closest sequence but the closest sequences in order of most similar to least similar.
And once again, I don't need this to be rolled into one formula; I am happy to go through a couple steps and different formulas manually to arrive at the answer.
And if you think this would be better solved in some other way, please let me know! But I do not have any coding experience so I figured Excel would be my best bet.
Thank you so much!!!
Not sure how exactly you have set this up, but if I visualize your graph in a table you could use the below (if you have Microsoft 365):
Formula in H2:
=INDEX(SORTBY(B2:F4,MMULT(ABS(B2:F4-B1:F1),SEQUENCE(5,,,0))),1)
With all your data in a single column, below is an example for sequences of 5.
Formula in C2:
=TRANSPOSE(INDEX(SORTBY(INDEX(A2:A16,SEQUENCE(11,5)-ROUNDDOWN(SEQUENCE(11,5,0,0.2),0)*4),MMULT(ABS(INDEX(A2:A16,SEQUENCE(11,5)-ROUNDDOWN(SEQUENCE(11,5,0,0.2),0)*4)-TRANSPOSE(B2:B6)),SEQUENCE(5,,,0))),1))
If you want to make this applicable to your dataset in A1:A500 with sequences of 10 numbers:
=TRANSPOSE(INDEX(SORTBY(INDEX(A1:A500,SEQUENCE(COUNT(A1:A500)-9,10)-ROUNDDOWN(SEQUENCE(COUNT(A1:A500)-9,10,0,0.1),0)*9),MMULT(ABS(INDEX(A1:A500,SEQUENCE(COUNT(A1:A500)-9,10)-ROUNDDOWN(SEQUENCE(COUNT(A1:A500)-9,10,0,0.1),0)*9)-TRANSPOSE(B1:B10)),SEQUENCE(10,,,0))),1))
And it will be even better if you have access to LET(); then it's a piece of cake to just change the range reference:
=LET(X,A2:A500,Y,INDEX(X,SEQUENCE(COUNT(X)-9,10)-ROUNDDOWN(SEQUENCE(COUNT(X)-9,10,0,0.1),0)*9),TRANSPOSE(INDEX(SORTBY(Y,MMULT(ABS(Y-TRANSPOSE(B2:B11)),SEQUENCE(10,,,0))),1)))
EDIT2:
To make it more dynamic you can use:
=LET(W,1,X,A2:A500,Y,11,Z,INDEX(X,SEQUENCE(COUNT(X)-(Y-1),Y)-ROUNDDOWN(SEQUENCE(COUNT(X)-(Y-1),Y,0,1/Y),0)*(Y-1)),TRANSPOSE(INDEX(SORTBY(Z,MMULT(ABS(Z-TRANSPOSE(B2:INDEX(B:B,Y+1))),SEQUENCE(Y,,,0))),W)))
Where "W" is the nth closest match and where "Y" is the length of the sequence, 11 in the example.
My approach would be to calculate a match value between each color and the input values, such as the sum of the absolute differences for each point.
The formula for this is:
=SUM(IF([inputrange]<>"",ABS([inputrange]-[colorrange]),0))
Where [inputrange] is the range of your input (indicated red in the picture below, $C$6:$G$6) and [colorrange] is the range of that color (indicated blue, C2:G2).
The color with the lowest difference is the match:
=VLOOKUP(MIN([matchvalues]),[rangeofmatchandcolors],2,0)
Where [matchvalues] is the range of match values (indicated blue in the picture below, cells A2:A4) and [rangeofmatchandcolors] is both the match values and the colors (indicated red, A2:B4).
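For comparison, the same two steps in a tiny Python sketch (the rows and their values are made up): compute one match value per candidate row, then the lowest value wins, which is what the VLOOKUP(MIN(...)) step does.
query = [1, 2.5, 3.5, 2.5, 1]
rows = {
    "blue":  [1, 2, 3, 2, 1],
    "green": [2, 4, 6, 4, 2],
    "red":   [0, 1, 1, 1, 0],
}

# Match value per row = sum of absolute differences against the query (the SUM(ABS(...)) step).
match_values = {name: sum(abs(a - b) for a, b in zip(vals, query))
                for name, vals in rows.items()}

print(min(match_values, key=match_values.get))   # -> 'blue', the lowest match value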

Multiple IF QUARTILEs returning wrong values

I am using a nested IF statement within a QUARTILE wrapper, and it only sort of works: it returns values that are slightly off from what I would expect if I calculate the range of values manually.
I've looked around, but most of the posts and research are about designing the formula; I haven't come across anything compelling in terms of this odd behaviour I'm observing.
My formula (entered with Ctrl+Shift+Enter as it's an array formula): =QUARTILE(IF(((F2:$F$10=$W$4)*($Q$2:$Q$10=$W$3))*($E$2:$E$10=W$2),IF($O$2:$O$10<>"",$O$2:$O$10)),1)
The full dataset:
0.868997877*
0.99480118
0.867040346*
0.914032128*
0.988150438
0.981207615*
0.986629288
0.984750004*
0.988983643*
*The formula has 3 AND conditions that need to be met and should return the range:
0.868997877
0.867040346
0.914032128
0.981207615
0.984750004
0.988983643
At which 25% is calculated based on the range.
If I take the output from the formula, the 25th percentile (QUARTILE, 1) is 0.8803, but if I calculate it manually based on the data points right above, it comes out to 0.8685 and I can't see why.
I suspect it's because the IF statements identify a slightly different range, or the values that meet the IF conditions come from different rows or something.
If you look at the table here you can see that there is more than one way of estimating a quartile (or other percentile) from a sample, and Excel has two. The one you are doing by hand must be like QUARTILE.EXC, and the one you are using in the formula is like QUARTILE.INC.
Basically both formulas work out the rank of the quartile value. If it isn't an integer, they interpolate (e.g. if it were 1.5, the quartile would lie halfway between the first and second numbers in ascending order). You might think that there wouldn't be much difference, but for small samples there is a massive difference:
QUARTILE.EXC: Rank = (N+1)/4
QUARTILE.INC: Rank = (N+3)/4
Here's how it would look with your data
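To see the two conventions side by side, here is a small Python check on the six values the IF conditions should return (the helper below is just linear interpolation at a fractional rank, not an Excel API):
data = sorted([0.868997877, 0.867040346, 0.914032128,
               0.981207615, 0.984750004, 0.988983643])

def quartile_at_rank(values, rank):
    # Linear interpolation at a 1-based fractional rank.
    lo = int(rank) - 1              # index of the lower neighbour
    frac = rank - int(rank)
    return values[lo] + frac * (values[lo + 1] - values[lo])

n = len(data)
print(quartile_at_rank(data, (n + 1) / 4))   # rank 1.75 -> ~0.8685, the "by hand" value
print(quartile_at_rank(data, (n + 3) / 4))   # rank 2.25 -> ~0.8803, what QUARTILE/QUARTILE.INC returns
For an independent check, Python's statistics.quantiles(data, n=4) with method='exclusive' or method='inclusive' should reproduce the same two first-quartile values.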

Differences in Differentiation Implementations in MATLAB

I'm trying to find the (numerical) curvature at specific points. I have data stored in an array, and I essentially want to find the local curvature at every separate point. I've searched around, and found three different implementations for this in MATLAB: diff, gradient, and del2.
If my array's name is arr I have tried the following implementations:
curvature = diff(diff(arr));
curvature = diff(arr,2);
curvature = gradient(gradient(arr));
curvature = del2(arr);
The first two seem to output the same values. This makes sense, because they're essentially the same implementation. However, the gradient and del2 implementations give different values from each other and from diff.
I can't figure out from the documentation precisely how the implementations work. My guess is that some of them use a two-sided (central) difference and some of them do not. Another thing that confuses me is that my current implementations use only the data from arr. arr is my y-axis data, the x-axis essentially being time. Do these functions default to a step size of 1 or something like that?
If it helps, I want an implementation that takes the curvature at the current point using only previous array elements. For context, my data is such that a curvature calculation based on data in the future of the current point wouldn't be useful for my purposes.
tl;dr I need a rigorous curvature at a point implementation that uses only data to the left of the point.
Edit: I now have a somewhat better understanding of what's going on, thanks to the answers below. This is what I'm referring to:
gradient calculates the central difference for interior data points. For example, consider a matrix with unit-spaced data, A, that has horizontal gradient G = gradient(A). The interior gradient values, G(:,j), are
G(:,j) = 0.5*(A(:,j+1) - A(:,j-1));
The subscript j varies between 2 and N-1, with N = size(A,2).
Even so, I still want to know how to do a "lefthand" computation.
diff is simply the difference between two adjacent elements in arr, which is exactly why you lose 1 element each time you apply diff. For example, 10 elements in an array only have 9 differences.
gradient and del2 are for derivatives. Of course, you can use diff to approximate a derivative by dividing the difference by the step size. Usually the steps are equally spaced, but they do not have to be. This answers your question about why x is not used in the calculation; it's also okay if your x is not uniformly spaced.
So why does gradient give us an array with the same length as the original array? The manual clearly explains how the boundary is handled:
The gradient values along the edges of the matrix are calculated with single-sided differences, so that
G(:,1) = A(:,2) - A(:,1);
G(:,N) = A(:,N) - A(:,N-1);
Double-gradient and del2 are not necessarily the same, although they are highly correlated. It all comes down to how you calculate/approximate the 2nd-order derivatives. The difference is that the former approximates the 2nd derivative by applying a 1st derivative twice, while the latter approximates the 2nd derivative directly. Please refer to the help manual; the formulas are documented.
Okay, do you really want curvature or the 2nd derivative for each point on arr? They are very different. https://en.wikipedia.org/wiki/Curvature#Precise_definition
You can use diff to get the 2nd derivative from the left. Since diff takes the difference from right to left, e.g. x(2)-x(1), you can first flip x from left to right, then use diff. Some code like:
x = fliplr(x);
first = diff(x)./h;
second = diff(first)./h;
where h is the spacing between the x values. Notice I use ./, which indicates that h can be an array (i.e. non-uniformly spaced).
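As a sketch of the "lefthand" computation the question asks for (in Python/NumPy rather than MATLAB; arr and h below are made-up example data), a purely backward second difference only ever uses the current and previous elements:
import numpy as np

arr = np.array([0.0, 0.1, 0.4, 0.9, 1.6, 2.5])   # y-values, here y = 0.1*x^2
h = 1.0                                           # assumed uniform step in x

first = np.diff(arr) / h      # backward first derivative: (y[i] - y[i-1]) / h
second = np.diff(first) / h   # (y[i] - 2*y[i-1] + y[i-2]) / h**2, defined from the 3rd point on

print(second)                 # [0.2 0.2 0.2 0.2], the constant 2nd derivative of 0.1*x^2
Remember that this is the 2nd derivative, not the curvature; for the curvature you would still need to combine it with the first derivative as in the Wikipedia definition linked above.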

Antipole Clustering

I made a photo mosaic script (PHP). This script takes one picture and changes it into a photo buildup of little pictures. From a distance it looks like the real picture; when you move closer you see that it is all little pictures. I take a square of a fixed number of pixels and determine the average color of that square. Then I compare this with my database, which contains the average color of a couple of thousand pictures. I determine the color distance with all available images. But running this script fully takes a couple of minutes.
The bottleneck is matching the best picture with a part of the main picture. I have been searching online for how to reduce this and came across “Antipole Clustering.” Of course I tried to find some information on how to use this method myself, but I can't seem to figure out what to do.
There are two steps. 1. Database acquisition and 2. Photomosaic creation.
Let's start with step one; when this is all clear, maybe I'll understand step 2 myself.
Step 1:
partition each image of the database into 9 equal rectangles arranged in a 3x3 grid
compute the RGB mean values for each rectangle
construct a vector x composed by 27 components (three RGB components for each rectangle)
x is the feature vector of the image in the data structure
Well, points 1 and 2 are easy, but what should I do at point 3? How do I compose a vector X out of the 27 components (9 × R mean, G mean, B mean)?
And when I succeed in composing the vector, what is the next step I should take with it?
Peter
Here is how I think the feature vector is computed:
You have 3 x 3 = 9 rectangles.
Each pixel is essentially 3 numbers, 1 for each of the Red, Green, and Blue color channels.
For each rectangle you compute the mean for the red, green, and blue colors for all the pixels in that rectangle. This gives you 3 numbers for each rectangle.
In total, you have 9 (rectangles) x 3 (mean for R, G, B) = 27 numbers.
Simply concatenate these 27 numbers into a single 27 by 1 (often written as 27 x 1) vector. That is 27 numbers grouped together. This vector of 27 numbers is the feature vector X that represents the color statistics of your photo. In the code, if you are using C++, this will probably be an array of 27 numbers or perhaps even an instance of the (aptly named) vector class. You can think of this feature vector as some form of "summary" of what the color in the photo is like. Roughly, things look like this: [R1, G1, B1, R2, G2, B2, ..., R9, G9, B9] where R1 is the mean/average of red pixels in the first rectangle and so on.
I believe step 2 involves some form of comparing these feature vectors so that those with similar feature vectors (and hence similar color) will be placed together. Comparison will likely involve the use of the Euclidean distance (see here), or some other metric, to compare how similar the feature vectors (and hence the photos' color) are to each other.
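In case a concrete version helps, here is a short Python sketch of that feature vector and the Euclidean comparison (Pillow is assumed for image loading; the asker's PHP version would follow the same structure):
from PIL import Image
import math

def feature_vector(path):
    # Mean R, G, B of each cell in a 3x3 grid, concatenated into 27 numbers.
    img = Image.open(path).convert('RGB')
    w, h = img.size
    px = img.load()
    vec = []
    for gy in range(3):
        for gx in range(3):
            xs = range(gx * w // 3, (gx + 1) * w // 3)
            ys = range(gy * h // 3, (gy + 1) * h // 3)
            n = len(xs) * len(ys)
            for channel in range(3):   # R, G, B
                vec.append(sum(px[x, y][channel] for x in xs for y in ys) / n)
    return vec                         # [R1, G1, B1, R2, G2, B2, ..., R9, G9, B9]

def distance(v1, v2):
    # Euclidean distance between two feature vectors; smaller means more similar color.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))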
Lastly, as Anony-Mousse suggested, converting your pixels from RGB to HSB/HSV color would be preferable. If you use OpenCV or have access to it, this is simply a one-liner. Otherwise the Wikipedia article on HSV etc. will give you the math formula to perform the conversion.
Hope this helps.
Instead of using RGB, you might want to use HSB space. It gives better results for a wide variety of use cases. Put more weight on hue to get better color matches for photos, or on brightness when composing high-contrast images (logos etc.).
I have never heard of antipole clustering. But the obvious next step would be to put all the images you have into a large index. Say, an R-Tree. Maybe bulk-load it via STR. Then you can quickly find matches.
Maybe it means vector quantization (VQ). In VQ the image isn't subdivided into rectangles but into density areas. Then you can take the mean point of each cluster. First you need to take all colors and pixels separately and transfer them into a vector with XY coordinates. Then you can use a density clustering like Voronoi cells and get the mean point. You can compare this point with other pictures in the database. Read here about VQ: http://www.gamasutra.com/view/feature/3090/image_compression_with_vector_.php.
How to compute the vector from adjacent pixels:
d(x) = I(x+1,y) - I(x,y)
d(y) = I(x,y+1) - I(x,y)
Here's another link: http://www.leptonica.com/color-quantization.html.
Update: When you have already computed the mean color of your thumbnails, you can proceed to sort all the mean colors in an RGB map and use the formula I gave you to compute the vector x. Now that you have a vector for all your thumbnails, you can use the antipole tree to search for a thumbnail. This is possible because the antipole tree is something like a kd-tree and subdivides the 2D space. Read here about the antipole tree: http://matt.eifelle.com/2012/01/17/qtmosaic-0-2-faster-mosaics/. Maybe you can ask the author and download the source code?

KD-Trees and missing values (vector comparison)

I have a system that stores vectors and allows a user to find the n most similar vectors to the user's query vector. That is, a user submits a vector (I call it a query vector) and my system spits out "here are the n most similar vectors." I generate the similar vectors using a KD-Tree and everything works well, but I want to do more. I want to present a list of the n most similar vectors even if the user doesn't submit a complete vector (a vector with missing values). That is, if a user submits a vector with three dimensions, I still want to find the n nearest vectors (stored vectors are of 11 dimensions) I have stored.
I have a couple of obvious solutions, but I'm not sure either one seems very good:
Create multiple KD-Trees, each built using the most popular subset of dimensions a user will search for. That is, if a user submits a query vector of three dimensions, x, y, z, I match that query to my already-built KD-Tree which only contains vectors of three dimensions, x, y, z.
Ignore KD-Trees when a user submits a query vector with missing values and compare the query vector to the vectors (stored in a table in a DB) one by one using something like a dot product.
This has to be a common problem, any suggestions? Thanks for the help.
Your first solution might be fastest for queries (since the tree-building doesn't consider splits in directions that you don't care about), but it would definitely use a lot of memory. And if you have to rebuild the trees repeatedly, it could get slow.
The second option looks very slow unless you only have a few points. And if that's the case, you probably didn't need a kd-tree in the first place :)
I think the best solution involves getting your hands dirty in the code that you're working with. Presumably the nearest-neighbor search computes the distance between the point in the tree leaf and the query vector; you should be able to modify this to handle the case where the point and the query vector are different sizes. E.g. if the points in the tree are given in 3D, but your query vector is only length 2, then the "distance" between the point (p0, p1, p2) and the query vector (x0, x1) would be
sqrt( (p0-x0)^2 + (p1-x1)^2 )
I didn't dig into the java code that you linked to, but I can try to find exactly where the change would need to go if you need help.
-Chris
PS - you might not need the sqrt in the equation above, since distance squared is usually equivalent.
EDIT
Sorry, didn't realize it would be so obvious in the source code. You should use this version of the neighbor function:
nearest(double [] key, int n, Checker<T> checker)
And implement your own Checker class; see their EuclideanDistance.java to see the Euclidean version. You may also need to comment out any KeySizeException that the query code throws, since you know that you can handle differently sized keys.
Your second option looks like a reasonable solution for what you want.
You could also populate the missing dimensions with the most important (or average, or whatever you think it should be) values, if there are any.
You could try using the existing KD tree -- by taking both branches when the split is for a dimension that is not supplied by the source vector. This should take less time than doing a brute force search, and might be less trouble than trying to maintain a bunch of specialized trees for dimension subsets.
You would need to adapt your N-closest algorithm (without more info I can't advise you on that...), and for distance you would use the sum of the squares of only those elements supplied by the source vector.
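Here is a hedged Python sketch of that idea (a toy kd-tree, not the asker's Java library): the nearest-neighbor search descends both subtrees whenever the query is missing the split dimension, and the distance is summed only over the supplied dimensions.
class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    # Build a toy kd-tree by median split on cycling axes.
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))

def partial_dist2(point, query):
    # Squared distance over only the dimensions the query supplies (None = missing).
    return sum((p - q) ** 2 for p, q in zip(point, query) if q is not None)

def nearest(node, query, best=None):
    if node is None:
        return best
    d = partial_dist2(node.point, query)
    if best is None or d < best[0]:
        best = (d, node.point)
    q = query[node.axis]
    if q is None:
        # The split dimension is missing from the query: we cannot prune, so search both subtrees.
        best = nearest(node.left, query, best)
        best = nearest(node.right, query, best)
    else:
        near, far = (node.left, node.right) if q < node.point[node.axis] else (node.right, node.left)
        best = nearest(near, query, best)
        if (q - node.point[node.axis]) ** 2 < best[0]:   # the far side may still hold a closer point
            best = nearest(far, query, best)
    return best

# 3-D points; the query supplies only the first two dimensions.
tree = build([(2, 3, 1), (5, 4, 7), (9, 6, 2), (4, 7, 9), (8, 1, 5)])
print(nearest(tree, (5, 4, None)))   # -> (0, (5, 4, 7)) under the partial metric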
Here's what I ended up doing: when a user didn't specify a value (when their query vector lacked a dimension), I simply adjusted my matching range (in the API) to something huge so that I match any value.
