I have the following scenario:
Table is _etblpricelistprices
Columns are as follows:
iPriceListNameID  iPricelistNameID  iStockID  fExclPrice
        1                 1             1          10
        2                 2             1          20
        3                 3             1          30
        4                 4             1          40
        5                 5             1         100
        6                 6             1         200
        7                 7             1         300
        8                 8             1         400
        9                 1             2        1000
       10                 2             2        2000
       11                 3             2        3000
       12                 4             2        4000
       13                 5             2          50
       14                 6             2          40
       15                 7             2          30
       16                 8             2          20
There are only two stock items here, but many more in the DB. The first column is the PK, which auto-increments. The second column is the pricelist ID. The pricelists are split as follows: 1-4 hold current pricing and 5-8 hold future pricing. The third column is the stock item's ID, and the fourth column is the item's price.
I need a script to update this table to swap the future and current pricing per item. Please help.
Observe, if you will, that swapping the iPricelistNameID values will achieve the same overall effect as swapping the fExclPrice values, and can be performed using a formula:
UPDATE _etblpricelistprices
SET
iPricelistNameID = CASE
WHEN iPricelistNameID > 4 THEN iPricelistNameID - 4
ELSE iPricelistNameID + 4
END
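Before committing, you can sanity-check the pairing with a query along these lines (a sketch; the pairing of current list n with future list n + 4 is inferred from the description, and running the UPDATE inside a transaction lets you ROLLBACK if the pairs look wrong):
-- Hypothetical verification query: list current/future price pairs per stock item.
SELECT cur.iStockID,
       cur.iPricelistNameID AS CurrentList,
       cur.fExclPrice       AS CurrentPrice,
       fut.iPricelistNameID AS FutureList,
       fut.fExclPrice       AS FuturePrice
FROM _etblpricelistprices AS cur
JOIN _etblpricelistprices AS fut
  ON fut.iStockID = cur.iStockID
 AND fut.iPricelistNameID = cur.iPricelistNameID + 4
WHERE cur.iPricelistNameID <= 4;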
I'm looking for the fastest way to get unique values in a matrix with MATLAB! I have a matrix like this:
1 2
1 2
1 3
1 5
1 23
2 1
3 1
3 2
3 2
3 2
4 17
4 3
4 17
and need to get something like this:
1 2
1 3
1 5
1 23
2 1
3 1
3 2
4 3
4 17
Actually, I need the rows that are unique as a combination of their column values.
Have a look at MATLAB's unique() function with the argument 'rows':
C = unique(A,'rows')
https://de.mathworks.com/help/matlab/ref/unique.html
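Applied to the sample matrix from the question, a quick sketch:
A = [1 2; 1 2; 1 3; 1 5; 1 23; 2 1; 3 1; 3 2; 3 2; 3 2; 4 17; 4 3; 4 17];
C = unique(A, 'rows')   % duplicate rows removed; result sorted row-wise
which returns exactly the nine rows listed in the question.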
I have a 48x202 matrix, where the first column is an ID and the rest of the columns are vectors related to the row ID in the first column.
The ID column is sorted in ascending order, and multiple rows can have the same ID.
I want to aggregate all equal IDs, meaning that I want to sum the rows of the matrix that have an identical ID in the first column.
The resulting matrix should be 32x202, since there are only 32 IDs.
Any ideas?
I'd totally approach this with accumarray as well as unique. Like the previous answer, let A be your matrix. You would obtain your answer as follows:
[vals,~,id] = unique(A(:,1),'stable');
B = accumarray(id, (1:numel(id)).', [], @(x) {sum(A(x,2:end),1)});
out = [vals cell2mat(B)];
The first line of code produces vals, a list of all unique IDs seen in the first column of A, and id, which relabels each row with a gap-free integer from 1 up to the number of unique IDs. The reason you want this is the next line of code.
How accumarray works is that you provide a set of keys and a set of values associated with each key. accumarray groups all values that belong to the same key and applies a function to each group. The keys in our case are the IDs given in the first column of A, and the values are the row locations of A, from 1 up to as many rows as A has. The default behaviour when collecting the values for a key is to sum them, but here we do something slightly different: for each unique ID in the first column of A there is a bunch of row locations that map to it, and we use those row locations to index into A and sum all of the columns from the second column to the end. That's what the anonymous function in the fourth argument of accumarray is doing. accumarray traditionally outputs a single value per key, but we get around this by outputting a single cell, where each cell entry is the row sum over the mapped rows.
Each element of B gives you the row sum for the corresponding unique value in vals, so the last line of code pieces these together: the unique value in vals next to its row sum. I had to use cell2mat because B is a cell array and needs converting into a numerical matrix to complete the task.
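As a toy illustration of that default grouping behaviour (numbers are mine, purely illustrative):
accumarray([1; 1; 2], [10; 20; 30])   % keys 1,1,2 -> returns [30; 30]
The two values under key 1 are summed, and the lone value under key 2 stands alone.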
Here's an example seeing this in action. I'm going to do this for a smaller set of data:
>> rng(123);
>> A = [[1;1;1;2;2;2;2;3;3;4;4;5;6;7] randi(10, 14, 10)];
>> A
A =
1 7 4 3 4 5 1 10 3 2 3
1 3 8 7 5 7 9 9 4 9 6
1 3 2 1 9 9 7 4 6 4 9
2 6 2 5 3 6 8 1 7 6 4
2 8 6 5 5 7 1 4 2 6 8
2 5 6 5 10 6 6 4 2 6 2
2 10 7 5 6 7 6 8 4 1 7
3 7 9 4 7 7 2 10 7 10 9
3 5 8 5 2 9 2 4 9 10 10
4 4 7 9 9 1 7 8 6 3 1
4 4 8 10 7 8 4 6 9 3 5
5 8 4 6 6 3 7 7 4 6 3
6 5 4 7 4 2 6 2 4 10 5
7 1 3 2 4 6 4 4 4 10 6
The first column is our IDs, and the next columns are the data. Running the above code I just wrote, we get:
>> out
out =
1 13 14 11 18 21 17 23 13 15 18
2 29 21 20 24 26 21 17 15 19 21
3 12 17 9 9 16 4 14 16 20 19
4 8 15 19 16 9 11 14 15 6 6
5 8 4 6 6 3 7 7 4 6 3
6 5 4 7 4 2 6 2 4 10 5
7 1 3 2 4 6 4 4 4 10 6
If you double-check each row, summing over all of the rows of A that share an ID matches the output. For example, the first three rows map to the same ID, so summing them gives the first output row: the second column is 7+3+3=13, the third column is 4+8+2=14, etc.
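That spot check can be automated; a quick sketch against the first group (row indices taken from the sample above):
% Rows 1-3 of A share ID 1, so their column sums should equal the first
% output row's data portion.
assert(isequal(out(1, 2:end), sum(A(1:3, 2:end), 1)))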
Another approach is to apply unique and then use bsxfun to build an indicator matrix that, multiplied by the non-ID part of the input matrix, gives the result.
Let the input matrix be denoted as A. Then:
[u, ~, v] = unique(A(:,1));
result = [ u bsxfun(@eq, u, u(v).') * A(:,2:end) ];
Example: borrowing from @rayryeng's answer, let
A = [ 1 7 4 3 4 5 1 10 3 2 3
1 3 8 7 5 7 9 9 4 9 6
1 3 2 1 9 9 7 4 6 4 9
2 6 2 5 3 6 8 1 7 6 4
2 8 6 5 5 7 1 4 2 6 8
2 5 6 5 10 6 6 4 2 6 2
2 10 7 5 6 7 6 8 4 1 7
3 7 9 4 7 7 2 10 7 10 9
3 5 8 5 2 9 2 4 9 10 10
4 4 7 9 9 1 7 8 6 3 1
4 4 8 10 7 8 4 6 9 3 5
5 8 4 6 6 3 7 7 4 6 3
6 5 4 7 4 2 6 2 4 10 5
7 1 3 2 4 6 4 4 4 10 6 ];
Then the result is
result =
1 13 14 11 18 21 17 23 13 15 18
2 29 21 20 24 26 21 17 15 19 21
3 12 17 9 9 16 4 14 16 20 19
4 8 15 19 16 9 11 14 15 6 6
5 8 4 6 6 3 7 7 4 6 3
6 5 4 7 4 2 6 2 4 10 5
7 1 3 2 4 6 4 4 4 10 6
and the intermediate matrix created with bsxfun is
>> bsxfun(@eq, u, u(v).')
ans =
1 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1
Pre-multiplying A(:,2:end) by this matrix means that the first three rows of A are added to give the first row of the result; then the following four rows of A are added to give the second row, etc.
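On MATLAB R2016b or later, implicit expansion makes the bsxfun call unnecessary; an equivalent sketch:
[u, ~, v] = unique(A(:,1));
result = [ u (u == u(v).') * A(:,2:end) ];   % logical indicator matrix times the data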
You can find the unique row IDs with unique and then loop over them, summing the other columns. Let A be your matrix; then:
rID = unique(A(:, 1));                 % sorted unique row IDs
B = zeros(numel(rID), size(A, 2));     % preallocate the output
for ii = 1:numel(rID)
    B(ii, 1) = rID(ii);
    B(ii, 2:end) = sum(A(A(:, 1) == rID(ii), 2:end), 1);  % sum matching rows
end
B contains your output.
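For the sample A above, the loop reproduces the accumarray result (a sketch; assumes out from the earlier answer is still in the workspace):
isequal(B, out)   % returns true for the sample data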
Say I have a table of subtractions and divisions sorted by date:
tblFactors
dt sub divide
2014-07-01 1 1
2014-06-01 0 5
2014-05-01 2 1
2014-05-01 0 3
I have another table of values, sorted by date:
tblValues
dt val
2014-07-05 4
2014-06-15 5
2014-05-15 21
2014-04-14 31
2014-03-15 71
I need to perform some sequential calculations. For the first row in tblFactors, I need to subtract 1 from every val where tblValues.dt < '2014-07-01'.
Next, I need to process the second row in tblFactors. There is nothing to subtract. However, the divide = 5 means that I need to divide every val by 5 where tblValues.dt < '2014-06-01'. The tricky thing is that I need to do this on the modified val from the row before (divide 20 / 5, not 21 / 5).
Each row in tblFactors is processed in this manner, giving a sequence like this:
tblFactors:                 Row 1        Row 2        Row 3        Row 4
Dt          Original Val    Subtract 1   Divide by 5  Subtract 2   Divide by 3
7/5/2014     4
6/15/2014    5               4
5/15/2014   21              20            4
4/14/2014   31              30            6            4
3/15/2014   71              70           14           12            4
This would leave me with:
qryValues
dt val
2014-07-05 4
2014-06-15 4
2014-05-15 4
2014-04-14 4
2014-03-15 4
Right now I'm doing vector multiplications over loops in R. I was wondering if there was a clever way to accomplish this in native SQL. I tried some aggregations but have had limited success.
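One SQL angle, sketched under assumptions: every factor row maps v to (v - sub) / divide, and composing such affine maps is still affine, so the first n factor rows (newest first) reduce to v / A - B for some running A and B. The recursive CTE below (T-SQL flavored; table and column names taken from the question) computes those running coefficients and joins each value to the deepest factor row dated after it. Note that the worked table above leaves the divide-by-3 cell blank for 4/14/2014, so the ordering and scope of the two 2014-05-01 factor rows is an assumption here; this sketch applies the strict tblValues.dt < tblFactors.dt rule throughout.
WITH f AS (
    -- Number the factor rows from newest to oldest.
    SELECT dt, sub, divide,
           ROW_NUMBER() OVER (ORDER BY dt DESC) AS rn
    FROM tblFactors
),
fold AS (
    -- Running affine coefficients: after factor rows 1..n, v -> v / A - B.
    SELECT rn, dt,
           CAST(divide AS float)          AS A,
           CAST(sub AS float) / divide    AS B
    FROM f
    WHERE rn = 1
    UNION ALL
    SELECT f.rn, f.dt,
           fold.A * f.divide,
           (fold.B + f.sub) / f.divide
    FROM fold
    JOIN f ON f.rn = fold.rn + 1
)
SELECT v.dt,
       COALESCE(v.val / x.A - x.B, v.val) AS val  -- untouched if no factor applies
FROM tblValues AS v
OUTER APPLY (SELECT TOP (1) A, B
             FROM fold
             WHERE fold.dt > v.dt
             ORDER BY rn DESC) AS x;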
I have this dataframe:
df <- data.frame(subject = c(rep("one", 20), c(rep("two", 20))),
score1 = sample(1:3, 40, replace=T),
score2 = sample(1:6, 40, replace=T),
score3 = sample(1:3, 40, replace=T),
score4 = sample(1:4, 40, replace=T))
subject score1 score2 score3 score4
1 one 2 4 2 2
2 one 3 3 1 2
3 one 1 2 1 3
4 one 3 4 1 2
5 one 1 2 2 3
6 one 1 5 2 4
7 one 2 5 3 2
8 one 1 5 1 3
9 one 3 5 2 2
10 one 2 3 3 4
11 one 3 2 1 3
12 one 2 5 2 1
13 one 2 4 1 4
14 one 2 2 1 3
15 one 1 3 1 4
16 one 1 6 1 3
17 one 3 4 2 2
18 one 3 2 1 3
19 one 2 5 3 1
20 one 3 6 2 1
21 two 1 6 3 4
22 two 1 2 1 2
23 two 3 2 1 2
24 two 1 2 2 1
25 two 2 3 1 3
26 two 1 5 3 3
27 two 2 4 1 4
28 two 2 6 2 4
29 two 1 6 2 2
30 two 1 5 1 4
31 two 2 1 2 4
32 two 3 6 1 1
33 two 1 1 3 1
34 two 2 4 2 3
35 two 2 1 3 2
36 two 2 3 1 3
37 two 1 2 3 4
38 two 3 5 2 2
39 two 2 1 3 4
40 two 2 1 1 3
Note that the scores have different ranges of values: score1 ranges from 1-3, score2 from 1-6, score3 from 1-3, and score4 from 1-4.
I'm trying to reshape the data like this:
library(reshape2)
dfMelt <- melt(df, id.vars="subject")
acast(dfMelt, subject ~ value ~ variable)
Aggregation function missing: defaulting to length
, , score1
1 2 3 4 5 6
one 6 7 7 0 0 0
two 8 9 3 0 0 0
, , score2
1 2 3 4 5 6
one 0 5 3 4 6 2
two 5 4 2 2 3 4
, , score3
1 2 3 4 5 6
one 10 7 3 0 0 0
two 8 6 6 0 0 0
, , score4
1 2 3 4 5 6
one 3 6 7 4 0 0
two 3 5 5 7 0 0
Note that the output array includes scores as "0" if they are missing. Is there any way to stop these missing scores from being output by acast?
In this case, you might do better sticking to base R's table feature. I'm not sure that you can have an irregular array like you are looking for.
For example:
> lapply(df[-1], function(x) table(df[[1]], x))
$score1
x
1 2 3
one 9 6 5
two 11 4 5
$score2
x
1 2 3 4 5 6
one 2 5 4 3 3 3
two 4 2 2 3 4 5
$score3
x
1 2 3
one 9 5 6
two 4 11 5
$score4
x
1 2 3 4
one 4 4 8 4
two 2 6 5 7
Or, using your "long" data:
with(dfMelt, by(dfMelt, variable,
FUN = function(x) table(x[["subject"]], x[["value"]])))
Since each "score" subset is going to have a different shape, you will not be able to preserve the array structure. One option is to use lists of two-dimensional arrays or data.frames, e.g.:
# your original acast call
res <- acast(dfMelt, subject ~ value ~ variable)
# remove any columns that are all zero
apply(res, 3, function(x) x[, apply(x, 2, sum)!=0] )
Which gives:
$score1
1 2 3
one 7 8 5
two 6 8 6
$score2
1 2 3 4 5 6
one 4 2 6 4 1 3
two 2 5 3 4 3 3
$score3
1 2 3
one 5 10 5
two 5 11 4
$score4
1 2 3 4
one 5 4 4 7
two 4 6 6 4
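Equivalently, assuming the same res array as above, colSums makes the zero-column filter slightly more idiomatic (drop = FALSE guards the single-column edge case):
apply(res, 3, function(x) x[, colSums(x) != 0, drop = FALSE])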
This issue is related to a question I asked here. I have a table that looks like this:
Item Count
1 1
2 4
3 8
4 2
5 6
6 3
I need to group items whose count is, for example, less than 5 into new groups, and the total of each group should be at least 5. The result should look like this:
Item Group Count
1 1 1
2 1 4
3 2 8
4 3 2
5 4 6
6 3 3
How do I achieve this? Many thanks.
Why isn't this a correct result?
Item Group Count
1 1 1
2 2 4
3 3 8
4 4 2
5 5 6
6 1 3
Or this?
Item Group Count
1 1 1
2 2 4
3 3 8
4 4 2
5 5 6
6 6 3
Seems to me that you're trying to solve the problem 'how to group the items so as to minimize the number of groups and maximize the number of items in each group, while keeping each group's total at or above the limit of 5'. Which sounds a lot like the knapsack problem. Perhaps you should read Celko's SQL Stumper: The Class Scheduling Problem and the solutions proposed there. Others have also approached this problem, e.g. And now for a completely inappropriate use of SQL Server. Heads up: this is not a trivial problem by any means. Any naive algorithm will die a slow death attempting to solve it on a 1M-row table...