How to get the last 25 items of an array? - arrays

{1,2,3,4,5,6,7,8,9,10}
Let's say I need to get the last 3 items in this array.
{8,9,10}
How can I do that with a high-performance approach when the array has millions of items?
SELECT *
FROM unnest((SELECT arraycolumn FROM table WHERE id = 1))
     WITH ORDINALITY AS t(a1, num)
ORDER BY t.num DESC
LIMIT 25;
This takes 8 seconds to get the last 25 items from an array with 16 million items. I think there should be a much faster way.

Try a slice:
select arraycolumn[array_upper(arraycolumn,1)-24 : array_upper(arraycolumn,1)]
from my_table
where id = 1
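The slice touches only the single row and never materializes the 16 million elements as rows, which is why it is so much faster than unnest. On PostgreSQL 9.6 or later you can also leave the upper slice bound open; a minimal variant of the same idea, assuming the same table and a one-dimensional array with the default lower bound of 1:

select arraycolumn[cardinality(arraycolumn)-24:]  -- last 25 elements
from my_table
where id = 1;

cardinality (available since 9.4) counts all elements, so for this kind of array it is interchangeable with array_upper(arraycolumn, 1).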

Related

Alternative solutions to an array search in PostgreSQL

I am not sure whether my database design is a good fit for this tricky case, and I would also like help with how the query for it could look.
I plan a query with the following table:
search_array          | value | id
----------------------+-------+----
{XYa,YZb,WQb}         | b     | 1
{XYa,YZb,WQb,RSc,QZa} | a     | 2
{XYc,YZa}             | c     | 3
{XYb}                 | a     | 4
{RSa}                 | c     | 5
There are 5 main elements in the search_array: XY, YZ, WQ, RS, QZ, and 3 values: a, b, c that are concatenated to each element.
Each row also has one value: a, b or c.
My aim is to find all rows that fit a specific row in this sense: first, it should be checked whether they have any main elements in common in their search_arrays.
For example:
Row id 4 and row id 5 wouldn't match, because XY != RS.
Rows id 1, 2 and 3 would match twice, because they all have XY and YZ.
Rows id 1 and 2 would even match three times, because they also have WQ in common.
Second: if there is a main-element match, it should be cross-checked whether the lowercase letter after the main element matches the value of the other row.
For example: the only match for row id 1 in the table would be row id 4, because they both search for XY and the lowercase letters after the elements cross-match the two rows' values.
Another match would be rows id 2 and 5 with RS: search c matches value c and search a matches value a.
My idea was to cut the search_array elements into two parts in the query with the RIGHT and LEFT string functions, but I don't know how to combine the subqueries for this search.
Or would a completely different solution be faster? For example, splitting the search array into another table with the columns 'foreign key' (to the main table), 'main element' and 'searched value'. I am not sure whether this is the best solution, because the program would constantly have to go back to the main table to find two rows out of 3 million and compare their searched values.
Thank you very much for your answers and your time!
You'll have to represent the data in a normalized fashion. I'll do it in a WITH clause, but it would be better to store the data in this fashion to begin with.
WITH unravel AS (
    SELECT t.id, t.value,
           substr(u.val, 1, 2) AS arr_main,
           substr(u.val, 3, 1) AS arr_val
    FROM mytable AS t
        CROSS JOIN LATERAL unnest(t.search_array) AS u(val)
)
SELECT a.id AS first_id,
       a.value AS first_value,
       b.id AS second_id,
       b.value AS second_value,
       a.arr_main AS main_element
FROM unravel AS a
    JOIN unravel AS b
        ON a.arr_main = b.arr_main
        AND a.arr_val = b.value
        AND b.arr_val = a.value;
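If you are free to change the schema, storing this unraveled form permanently (as the last paragraph of the question already suggests) lets the self-join use an index instead of unnesting 3 million arrays on every query. A minimal sketch, with illustrative names and types:

CREATE TABLE search_element (
    id       bigint NOT NULL,  -- foreign key to the main table
    arr_main text   NOT NULL,  -- main element, e.g. 'XY'
    arr_val  text   NOT NULL   -- lowercase letter, e.g. 'a'
);
CREATE INDEX ON search_element (arr_main, arr_val);

The query above can then select from search_element directly instead of the WITH clause, and the index supports the equality conditions on arr_main and arr_val.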

Fill per-second data in Q/KDB+

I have a csv file with some high-frequency stock price data, and I'd like to derive per-second price data from the table.
In each file there are columns named date, time, symbol, price, volume, and so on.
Some seconds have no trading, so data is missing for those seconds.
I'm wondering how I could fill in the missing data in Q to get complete per-second data from 9:30 to 16:00. If a price is missing, just use the most recent price as the price for that second.
I'm considering writing a loop, but I don't know exactly how to do that.
Simplifying a little, I'll assume you have some random timestamps in your dataset like this:
time price
--------------------------------------
2015.01.20D22:42:34.776607000 7
2015.01.20D22:42:34.886607000 3
2015.01.20D22:42:36.776607000 4
2015.01.20D22:42:37.776607000 8
2015.01.20D22:42:37.886607000 7
2015.01.20D22:42:39.776607000 9
2015.01.20D22:42:40.776607000 4
2015.01.20D22:42:41.776607000 9
so there are some missing seconds there. I'm going to call this table t. So if you do a by-second type of query, obviously the seconds that are missing are still missing:
q)select max price by time.second from t
second | price
--------| -----
22:42:34| 7
22:42:36| 4
22:42:37| 8
22:42:39| 9
22:42:40| 4
22:42:41| 9
To get the missing seconds, you have to join against a list of nulls. In this case we know the data runs from 22:42:34 to 22:42:41, but in reality you'll have to find the min/max time and use that to create a temporary "null" table to join against:
q)([] second:22:42:34 + til 1+`int$22:42:41-22:42:34 ; price:(1+`int$22:42:41-22:42:34)#0N)
second price
--------------
22:42:34
22:42:35
22:42:36
22:42:37
22:42:38
22:42:39
22:42:40
22:42:41
Then left join:
q)([] second:22:42:34 + til 1+`int$22:42:41-22:42:34 ; price:(1+`int$22:42:41-22:42:34)#0N) lj select max price by time.second from t
second price
--------------
22:42:34 7
22:42:35
22:42:36 4
22:42:37 8
22:42:38
22:42:39 9
22:42:40 4
22:42:41 9
You can use fills or whatever your favourite filling heuristic is after that.
q)fills `second xasc ([] second:22:42:34 + til 1+`int$22:42:41-22:42:34 ; price:(1+`int$22:42:41-22:42:34)#0N) lj select max price by time.second from t
second price
--------------
22:42:34 7
22:42:35 7
22:42:36 4
22:42:37 8
22:42:38 8
22:42:39 9
22:42:40 4
22:42:41 9
(Note the sort on second before fills!)
By the way, for larger tables this will be much faster than a loop. Loops in q are generally a bad idea.
EDIT
You could use a comma join too; both tables need to be keyed on the second column:
t1,t2
(where t1 is the null-filled table keyed on second, and t2 is the keyed by-second aggregate; the order matters, because on key collisions the right-hand operand's records win, so the table with the real data must come second)
I haven't tested it, but I suspect it would be slightly faster than the lj version.
Using aj, which is one of the most powerful features of KDB+:
q)data
sym  time     price     size
----------------------------
MS   10:24:04 93.35974  8
MS   10:10:47 4.586986  1
APPL 10:50:23 0.7831685 1
GOOG 10:19:52 49.17305  0
The in-memory table needs to be sorted by sym,time, with the g# attribute applied to the sym column:
q)data:update `g#sym from `sym`time xasc data
q)meta data
c    | t f a
-----| -----
sym  | s   g
time | v
price| f
size | j
Creating a rack table, intervalized per second per sym:
q)rack: `sym`time xasc (select distinct sym from data) cross ([] time:{x[0]+til `int$x[1]-x[0]}(min;max)@\:data`time)
Using aj to join the data:
q)aj[`sym`time; rack; data]

In SSRS, how can I add a row to aggregate all the rows that don't match a filter?

I'm working on a report that shows transactions grouped by type.
Type    Total income
------  ------------
A       575
B       244
C       128
D       45
E       5
F       3
Total   1000
I only want to provide details for transaction types that represent more than 10% of the total income (i.e. A-C). I'm able to do this by applying a filter to the group:
Type    Total income
------  ------------
A       575
B       244
C       128
Total   1000
What I want to display is a single row just above the total row that has a total for all the types that have been filtered out (i.e. the sum of D-F):
Type    Total income
------  ------------
A       575
B       244
C       128
Other   53
Total   1000
Is this even possible? I've tried using running totals and conditionally hidden rows within the group. I've tried IIf inside Sum. Nothing quite does what I need, and I'm butting up against scope issues (e.g. "the value expression has a nested aggregate that specifies a dataset scope").
If anyone can give me any pointers, I'd be really grateful.
EDIT: Should have specified, but at present the dataset actually returns individual transactions:
ID  Type  Amount
--  ----  ------
1   A     4
2   A     2
3   B     6
4   A     5
5   B     5
The grouping is done using a row group in the tablix.
One solution is to solve this in the SQL source of your dataset instead of inside SSRS:
SELECT
    CASE
        WHEN CAST([Total income] AS FLOAT) / SUM([Total income]) OVER (PARTITION BY 1) >= 0.10
            THEN [Type]
        ELSE 'Other'
    END AS [Type],
    [Total income]
FROM Source_Table
See also SQL Fiddle
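Since the edit says the dataset actually returns individual transactions, the same idea can be applied one level down; a sketch assuming a Transactions source with the Type and Amount columns from the edit:

SELECT
    CASE
        WHEN SUM(Amount) >= 0.10 * SUM(SUM(Amount)) OVER () THEN [Type]
        ELSE 'Other'
    END AS [Type],
    SUM(Amount) AS [Total income]
FROM Transactions
GROUP BY [Type]

The small types still come back as separate rows labelled 'Other'; the existing row group on Type in the tablix then collapses them into a single line.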
Try to solve this in SQL, see SQL Fiddle.
SELECT I.*,
    CASE
        WHEN I.TotalIncome >= (SELECT SUM(I2.TotalIncome) / 10 FROM Income I2) THEN 10
        ELSE 1
    END AS TotalIncomePercent
FROM Income I
After this, create two sum groups:
SUM(TotalIncome * TotalIncomePercent) / 10
SUM(TotalIncome * TotalIncomePercent)
A second approach may be to use a calculated field in SSRS. Try to create a calculated field with the above CASE expression. If SSRS allows you to create it, you can use it in the same way as in the SQL approach.
1) To show only income greater than 10%, use a row visibility condition like:
=IIf(ReportItems!total_income.Value / 10 <= I.totalincome, True, False)
Here ReportItems!total_income.Value is the textbox value holding the total of all income (the total of the detail group), and I.totalincome is the current field value.
2) Add one more row outside the detail group for the other income, and use an expression such as:
=ReportItems!total_income.Value - Sum(IIf(ReportItems!total_income.Value / 10 <= I.totalincome, I.totalincome, Nothing))

Linq - Limit list to 1 row per unique values based on value (minimum) of single field

I have a stored procedure (which I cannot edit) that I am calling via LINQ.
The stored procedure returns values (the real data is more complex, but the important columns are below):
Customer  Stock Item  Date        Price  Priority  Qty
------------------------------------------------------
CUST1     TAP         01-04-2012  £30    30        1 - 30
CUST1     TAP         05-04-2012  £33    30        1 - 30
CUST1     TAP         01-04-2012  £29    20        31 - 99
CUST1     TAP         01-04-2012  £28    10        1 - 30
I am trying to limit this list in LINQ to rows that have unique Dates and unique Quantities.
I want to remove the items with the higher priority, leaving rows with unique dates and quantities.
I have tried several group-bys using Max and order-bys, but have not been able to get a result.
Is there any way to do this via LINQ?
EDIT:
Managed to convert brad-rem's answer into VB.net.
Syntax below if anyone needs it:
returnlist = (From p In returnlist
              Order By p.Qty Ascending, p.Priority
              Group By AllGrp = p.Date, p.Qty Into g = Group
              Select g.First).ToList
How about the following. It groups by Date and Qty and orders so that the lower priorities come first. Then it just selects the first item from each group, which is the lowest-priority row in each:
var result = from d in dbData
             orderby d.Priority
             group d by new { d.Date, d.Qty } into group1
             select group1.First();

VBA two dimensional arrays connecting to a database

I am working with VBA in Excel and a database in Access. The Access database has a table containing 3 columns: OrderID, a column of numbers saying which order the particular item was in; OrderDescription, a column containing the description of the item; and Item #, a column giving a number to each particular item (if two items are the same, they share the same item number).
I need to build a 2-dimensional array in excel using VBA holding which items were purchased in which orders. The rows will be the Order ID and the columns will be the Item ID. The elements of this array will contain an indicator (like True or a “1”) that indicates that this order contains certain items. For example, row 6 (representing order ID 6) will have “True” in columns 1, 5, and 26 if that order purchased item IDs 1, 5, and 26. All other columns for that order will be blank.
To do this, I think I will have to determine the maximum order number (39) and the maximum item number (33). This information is available in the database, which I can connect to using a .Connection and .Recordset. Some order numbers and some item numbers may not appear.
Note also that this will likely be a sparse array (not many entries) as most orders contain only a few items. We do not care how many of an item a customer purchased, only that the item was purchased on this order.
MY QUESTION is: how can I set up this array? I tried a loop that assigns the order numbers to one array and the item numbers to another and then dimensions the result array to those sizes, but it won't work.
Is there a way to make an element of an array return a value of True if it exists?
Thanks for your help
It seems to me that the best bet may be a crosstab query run on an Access connection. You can create your array from the result with the ADO method GetRows: http://www.w3schools.com/ado/met_rs_getrows.asp.
TRANSFORM Nz([Item #],0)>0 AS Val
SELECT OrderNo
FROM TableX
GROUP BY OrderNo
PIVOT [Item #]
With a Counter table containing integers from 1 to the maximum number of items in a column (field) Num:
TRANSFORM First(q.Val) AS FirstOfVal
SELECT q.OrderNo
FROM (SELECT t.OrderNo, c.Num, Nz([Item #],0)>0 AS Val
      FROM TableX t RIGHT JOIN [Counter] c ON t.[Item #] = c.Num
      WHERE c.Num < 12) q
GROUP BY q.OrderNo
PIVOT q.Num
Output (Access displays True as -1 and False as 0):
OrderNo 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0
1 -1 -1 -1 -1
2 -1 -1 -1 -1
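If you would rather dimension the array yourself than take the bounds from the GetRows result, the maxima mentioned in the question (39 and 33) can come from a simple aggregate; a sketch assuming the TableX name used above:

SELECT MAX(OrderNo) AS MaxOrder, MAX([Item #]) AS MaxItem
FROM TableX;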
