Microsoft SQL Server Paging - sql-server

There are a number of SQL Server paging questions on Stack Overflow, and many of them talk about using ROW_NUMBER() OVER (ORDER BY ...) and a CTE. Once you get into the hundreds of thousands of rows and start adding sorting on non-primary-key values and custom WHERE clauses, these methods become very inefficient. I have a dataset of several million rows I am trying to page through with custom sorting and filtering, but I am getting poor performance, even with indexes on all the fields that I sort and filter by. I even went as far as to include my SELECT columns in each of the indexes, but this barely helped and severely bloated my database.
I noticed that Stack Overflow's paging only takes about 500 milliseconds no matter what sorting criteria or page number you click on. Does anyone know how to make paging work efficiently in SQL Server 2008 with millions of rows? This would include getting the total row count as efficiently as possible.
My current query has the exact same logic as this Stack Overflow question about paging:
Best paging solution using SQL Server 2005?

Anyone know how to make paging work efficiently in SQL Server 2008 with millions of rows?
If you want accurate, perfect paging, there is no substitute for building an index key (a positional row number) for each record. However, there are alternatives.
(1) total number of pages (records)
You can use an approximation from sysindexes.rows (almost instant) assuming the rate of change is small.
You can use triggers to maintain a completely accurate, to-the-second table row count; a sketch of both options follows.
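A minimal sketch of both ideas, assuming a hypothetical table named tbl (the counter table and trigger names are illustrative, not from the question):

-- (1) approximation from the catalog (almost instant)
select rows from sysindexes
where id = object_id('tbl') and indid < 2;

-- (2) trigger-maintained exact count
create table tbl_rowcount (cnt bigint not null);
insert into tbl_rowcount values (0);
go
create trigger trg_tbl_rowcount on tbl
after insert, delete
as
begin
    set nocount on;
    update tbl_rowcount
    set cnt = cnt + (select count(*) from inserted)
                  - (select count(*) from deleted);
end;
go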
(2) paging
(a)
You can show page jumps within, say, the next five pages to either side of a record. These need to scan at most {page size} x 5 rows on each side. If your underlying query lends itself to travelling along the sort order quickly, this should not be slow. So given a record X, you can go to the previous page using the following (assuming the sort order is a ASC, b DESC):
select top(@pagesize) t.*
from tbl x
inner join tbl t on (t.a = x.a and t.b > x.b)
                 or (t.a < x.a)
where x.id = @X
order by t.a desc, t.b asc  -- walk backwards from X; re-sort the result for display
(i.e. the last {page size} records prior to X)
To go five pages back, you increase it to TOP(@pagesize*5), then take a further TOP(@pagesize) from that subquery.
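A hedged sketch of that nesting, reusing the assumed columns a, b, and id from the query above: walk backwards five pages, keep the furthest page-size block, and restore display order.

select top(@pagesize) p.*
from (
    select top(@pagesize * 5) t.*
    from tbl x
    inner join tbl t on (t.a = x.a and t.b > x.b)
                     or (t.a < x.a)
    where x.id = @X
    order by t.a desc, t.b asc   -- walk backwards from X
) p
order by p.a asc, p.b desc       -- forward order; the TOP block is the 5th page back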
Downside: with this option you cannot directly jump to a particular location; your options are only FIRST (easy), LAST (easy), NEXT/PRIOR, and up to five pages to either side.
(b)
If the paging is always going to be quite specific and predictable, maintain an INDEXED VIEW or a trigger-updated table that does not contain gaps in the row number. This may be an option if the tables normally only see updates at one end of the spectrum, with gaps from deletes quickly filled by shifting relatively few records.
This approach gives you a rowcount (last row) and also direct access to any page.
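A hedged sketch of direct page access against such a trigger-maintained table (tbl_rownum, with a dense rn column in sort order, is a hypothetical name):

select t.*
from tbl_rownum m
inner join tbl t on t.id = m.id
where m.rn between @pageIndex * @pagesize + 1
               and (@pageIndex + 1) * @pagesize
order by m.rn;

-- the row count is then simply: select max(rn) from tbl_rownum;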

Try this. Let's say you have a Country table as below (note that OFFSET/FETCH requires SQL Server 2012 or later; a 2008-compatible variant follows the query):
DECLARE @pageIndex INT = 0;
DECLARE @pageSize INT = 10;
DECLARE @sortByColumn NVARCHAR(200) = 'Code';
DECLARE @sortByDesc BIT = 0;
;WITH tbl AS (
SELECT COUNT(c.Id) OVER() AS [RowTotal], c.Id, c.Code, c.Name
FROM dbo.[Country] c
ORDER BY
CASE WHEN @sortByColumn='Code' AND @sortByDesc=0 THEN c.Code END ASC,
CASE WHEN @sortByColumn='Code' AND @sortByDesc<>0 THEN c.Code END DESC,
CASE WHEN @sortByColumn='Name' AND @sortByDesc=0 THEN c.Name END ASC,
CASE WHEN @sortByColumn='Name' AND @sortByDesc<>0 THEN c.Name END DESC,
c.Name ASC --DEFAULT SORTING ORDER
OFFSET @pageIndex*@pageSize ROWS
FETCH NEXT @pageSize ROWS ONLY
) SELECT (@pageIndex*@pageSize)+(ROW_NUMBER() OVER(ORDER BY Id)) AS [RowNo], * FROM tbl;
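Since the question targets SQL Server 2008, which lacks OFFSET/FETCH, here is a hedged sketch of the same query rewritten with ROW_NUMBER() (same assumed Country table and variables):

DECLARE @pageIndex INT = 0;
DECLARE @pageSize INT = 10;
DECLARE @sortByColumn NVARCHAR(200) = 'Code';
DECLARE @sortByDesc BIT = 0;
;WITH tbl AS (
SELECT COUNT(c.Id) OVER() AS [RowTotal],
ROW_NUMBER() OVER (ORDER BY
CASE WHEN @sortByColumn='Code' AND @sortByDesc=0 THEN c.Code END ASC,
CASE WHEN @sortByColumn='Code' AND @sortByDesc<>0 THEN c.Code END DESC,
CASE WHEN @sortByColumn='Name' AND @sortByDesc=0 THEN c.Name END ASC,
CASE WHEN @sortByColumn='Name' AND @sortByDesc<>0 THEN c.Name END DESC,
c.Name ASC) AS [RowNo],
c.Id, c.Code, c.Name
FROM dbo.[Country] c
)
SELECT * FROM tbl
WHERE RowNo BETWEEN @pageIndex*@pageSize + 1 AND (@pageIndex+1)*@pageSize
ORDER BY RowNo;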

Related

GROUP BY on clustering key is not reading from metadata

I have defined a clustering key on one of the columns, "time_period". When I use a WHERE clause, the query operates on metadata, which I can see in the query profile of the query below:
select count(*) from table where time_period = 'Jan 2021'
but when I use GROUP BY to get the count for each month, it scans all the partitions:
select time_period , count(*) from table group by time_period
Why is the second query not a metadata operation?
select time_period , count(*) from table group by time_period;
is a full table scan.
select count(*) from table where time_period = 'Jan 2021'
is a full scan of the partitions whose time_period equals one value, so the metadata is searched to find the matching partitions, hence the pruning.
If your table has values from 'Jan 2020' to 'Jan 2021', and assuming those are dates rather than strings (strings would be very bad for performance), and assuming your data is clustered on time_period (or naturally inserted in month order), then
select time_period, count(*)
from table
where time_period >= '2021-06-01'
group by 1 order by 1;
should only read ~50% of your partitions, because the assumed ordering of the data means only half the table's partitions need to be read.
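As an aside, if you want to verify how well the table is actually clustered on time_period, Snowflake's SYSTEM$CLUSTERING_INFORMATION function reports depth and overlap statistics (the table name here is a placeholder):

select system$clustering_information('my_table', '(time_period)');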
Answering the "meta-data" vs "scanning" question. This is based on years of working with query optimization, and is "very well educated speculation".
There is a big difference between COUNT(*) and COUNT(*) ... GROUP BY. The latter is much more complex and handles much more complex queries.
Optimizers evolve over time to handle special cases, but they start out focusing on more common types of queries.
The non-GROUP query against a non-keyed but well-clustered table can be answered from metadata without a scan. It's a specialized, meaningful optimization for a special case.
But the same specialization is not present for the GROUP BY form, which addresses a much broader class of queries, with GROUP BY and WHERE clauses over multiple non-cluster-key columns.
The COUNT(*) ... GROUP BY path would need a special check for this particular query form; as soon as anything else is added, the metadata would no longer be sufficient.
So there is no specialized optimization for this specific COUNT(*) ... GROUP BY case.

Select a random row from Oracle DB in a performant way

Using :
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0
I am trying to fetch a random row. As suggested in other Stack Overflow questions, I used DBMS_RANDOM.VALUE like this:
SELECT column FROM
( SELECT column
FROM table
WHERE COLUMN_VALUE = 'Y' -- value of COLUMN_VALUE
ORDER BY dbms_random.value
)
WHERE rownum <= 1
But this query isn't performant when the number of requests increases.
So I am looking for an alternative.
SAMPLE wouldn't work for me because the sample picked by the clause wouldn't have a dataset that matches my WHERE clause. The query looked like this:
SELECT column FROM table SAMPLE(1) WHERE COLUMN_VALUE = 'Y'
Because the SAMPLE is applied before my WHERE clause, most times this returns no data.
P.S.: I am OK with moving some of the logic to the application layer (though I am definitely not looking for answers that suggest loading everything into memory).
The performance problems consist of two aspects:
selecting the data with column_value = 'Y' and
sorting this subset to get a random record
You didn't say whether the subset of your table with column_value = 'Y' is large or small. This is important and will drive your strategy.
If there are lots of records with column_value = 'Y', use SAMPLE to limit the rows to be sorted.
You are right that this can return an empty result; in that case repeat the query (you may additionally add logic that increases the sample percent to avoid a lot of repeating). This boosts performance because you sort only a sample of the data:
select id from (
select id from tt SAMPLE(1) where column_value = 'Y' order by dbms_random.value )
where rownum <= 1;
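A hedged PL/SQL sketch of the retry-with-growing-sample idea (table tt and its columns are taken from the query above; dynamic SQL is needed because SAMPLE requires a literal percentage):

declare
  v_id  tt.id%type;
  v_pct number := 1;
begin
  loop
    begin
      execute immediate
        'select id from (
           select id from tt sample(' || v_pct || ') where column_value = ''Y''
           order by dbms_random.value
         ) where rownum <= 1'
        into v_id;
      exit;  -- found a row
    exception
      when no_data_found then
        v_pct := least(v_pct * 2, 99.999999);  -- widen the sample and retry
    end;
  end loop;
end;
/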
If there are only a few records with column_value = 'Y', define an index on this column (or a separate partition); this enables efficient access to the records. Use the ORDER BY dbms_random.value approach; the sort will not degrade performance for a small number of rows.
select id from (
select id from tt where column_value = 'Y' order by dbms_random.value )
where rownum <= 1;
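A possible supporting index for this second approach; including id in the key as well lets Oracle answer the subquery from the index alone (index name assumed):

create index tt_column_value_ix on tt (column_value, id);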
Basically, both approaches keep the set of rows to be sorted small. The first approach performs a table access comparable to a FULL TABLE SCAN; the second performs an INDEX ACCESS for the selected column_value.

PostgreSQL Inserted rows differ from select

I have a problem with an INSERT in PostgreSQL. I have this query:
INSERT INTO track_segments(tid, gdid1, gdid2, distance, speed)
SELECT * FROM (
SELECT DISTINCT ON (pga.gdid)
pga.tid as ntid,
pga.gdid as gdid1, pgb.gdid as gdid2,
ST_Distance(pga.geopoint, pgb.geopoint) AS segdist,
(ST_Distance(pga.geopoint, pgb.geopoint) / EXTRACT(EPOCH FROM (pgb.timestamp - pga.timestamp + interval '0.1 second'))) as speed
FROM fl_pure_geodata AS pga
LEFT OUTER JOIN fl_pure_geodata AS pgb ON (pga.timestamp < pgb.timestamp AND pga.tid = pgb.tid)
ORDER BY pga.gdid ASC) AS sq
WHERE sq.gdid2 IS NOT NULL;
to fill a table with pairwise connected segments of geopoints. When I run the SELECT alone I get the correct pairs, but when I use it in the statement above, some are paired the wrong way or not at all. Here's what I mean:
result of SELECT alone:
tid;gdid1;gdid2;distance;speed
"0f6fd522-5f1e-49a4-b85e-50f11ef7f908";10;11;34.105058803;31.0045989118182
"0f6fd522-5f1e-49a4-b85e-50f11ef7f908";11;12;90.099603143;14.7704267447541
"0f6fd522-5f1e-49a4-b85e-50f11ef7f908";12;13;23.331326565;21.2102968772727
result after INSERT with the same SELECT:
tid;gdid1;gdid2;distance;speed
"0f6fd522-5f1e-49a4-b85e-50f11ef7f908";10;12;122.574;17.2639603638028
"0f6fd522-5f1e-49a4-b85e-50f11ef7f908";11;12;90.0996;14.7704267447541
"0f6fd522-5f1e-49a4-b85e-50f11ef7f908";12;13;23.3313;21.2102968772727
What could be the cause of that? It's exactly the same SELECT statement in the INSERT, so why does it give different results?
DISTINCT ON (pga.gdid) can pick any row from a set of rows with equal pga.gdid, so you can get different results even by executing the same query several times. Add additional ordering to get consistent results, something like: ORDER BY pga.gdid ASC, pgb.gdid ASC.
BTW, you may want to ORDER BY pga.gdid ASC, pgb.timestamp - pga.timestamp ASC to get the "next" point.
BTW2: it may be easier to use the lead() or lag() window functions to calculate differences between the current row and the next/previous one. That way you won't need a self-join and will likely get better performance, as sketched below:
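A hedged sketch of the lead() variant, reusing the table and columns from the question (the WHERE on the computed gdid2 replaces the outer-join filter):

INSERT INTO track_segments(tid, gdid1, gdid2, distance, speed)
SELECT s.tid, s.gdid1, s.gdid2, s.distance,
       s.distance / EXTRACT(EPOCH FROM (s.t2 - s.t1 + interval '0.1 second')) AS speed
FROM (
    SELECT g.tid,
           g.gdid AS gdid1,
           lead(g.gdid) OVER w AS gdid2,
           ST_Distance(g.geopoint, lead(g.geopoint) OVER w) AS distance,
           g.timestamp AS t1,
           lead(g.timestamp) OVER w AS t2
    FROM fl_pure_geodata g
    WINDOW w AS (PARTITION BY g.tid ORDER BY g.timestamp)
) s
WHERE s.gdid2 IS NOT NULL;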
You are ordering your query results only by the column pga.gdid, which is the same for every row within a duplicate group, so Postgres may order the remaining columns differently each time you run the SELECT.

How can I speed up this SQL Server query?

-- Holds the last 30 valuation dates
create table #valdates(
date int
)
insert into #valdates
select distinct top (30) valuation_date
from tbsm.tbl_key_rates_summary
where valuation_date <= 20150529
order by valuation_date desc
select
sum(fv_change), sc_group, valuation_date
from
(select *
from tbsm.tbl_security_scorecards_summary
where valuation_date in (select date from #valdates)) as fact
join
(select *
from tbsm.tbl_security_classification
where sc_book = 'UC' ) as dim on fact.classification_id = dim.classification_id
group by
valuation_date, sc_group
drop table #valdates
This query takes around 40 seconds to return because the fact table has almost 13 million rows. Can I do anything about this?
Given that there is no index that properly supports the fetch, adding one is probably the easiest (or only) option to really improve the performance. Most likely an index like this would improve the situation a lot:
create index idx_security_scorecards_summary_1 on
tbl_security_scorecards_summary (valuation_date, classification_id)
include (fv_change)
Everything depends, of course, on how good the selectivity of the valuation_date and classification_id fields is (= how big a portion of the table needs to be fetched), and the index might work better with the fields in the opposite order. The fv_change field is in the INCLUDE section so that it is stored in the index structure and doesn't need to be fetched from the base table.
Included fields help if the SQL has to fetch a lot of rows from the table. If the number of rows this touches is small, they might not help at all. As always with indexing, this slows down inserts/updates, is optimized for this case only, and you should of course look at the bigger picture too.
The SELECT is written in a slightly strange way; not sure if that makes any difference, but you could also try the normal way to do this:
select
sum(fact.c), dim.sc_group, fact.valuation_date
from
tbsm.tbl_security_scorecards_summary fact
join tbsm.tbl_security_classification dim
on fact.classification_id = dim.classification_id
where
fact.valuation_date in (select date from #valdates) and
dim.sc_book = 'UC'
group by
fact.valuation_date,
dim.sc_group
Looking at "statistics io" output should give you a good idea which table is causing the slowness, and looking at query plan to see if there's any strange operators might help to understand the situation better.

Which paging method (Sql Server 2008) for BEST performance?

In SQL Server 2008, many options are available for database paging via stored procedure. For example, see here and here.
OPTIONS:
ROW_NUMBER() function
ROWCOUNT
CURSORS
temporary tables
Nested SQL queries
OTHERS
Paging using ROW_NUMBER() is known to have performance issues.
Please advise, which paging method has the best performance (for large tables with JOINs) ?
Please also provide links to relevant article(s), if possible.
Thank You.
One question you have to answer is whether you want to display the total number of rows to the end user. To calculate the number of the last page, you also need the last row number.
If you can do without that information, a temporary table is a good option. You can select the primary key and use TOP (SQL Server's counterpart to LIMIT) to retrieve keys up to the key you're interested in. If you do this right, the typical use case will only retrieve the first few pages, as sketched below.
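A hedged sketch of that keyset idea in T-SQL (using the hypothetical Customer table keyed by memberid that also appears in the answer below): remember the last key of the previous page and seek past it.

DECLARE @pageSize INT = 10;
DECLARE @lastKey INT = 0;  -- last memberid of the previous page; 0 for the first page
SELECT TOP (@pageSize) memberid
FROM Customer
WHERE memberid > @lastKey
ORDER BY memberid;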
If you need the last page number, you can use ROW_NUMBER(). A temporary table won't be much faster in that case, because you can't restrict the scan with TOP; this strategy becomes roughly equivalent to a ROW_NUMBER() calculation.
We can get a rowcount using the following query:
WITH data AS
(
SELECT ROW_NUMBER() OVER (order by memberid ) AS rowid, memberid
FROM Customer
)
SELECT *, (select count(*) from data) AS TotalCount
FROM data
WHERE rowid > 20 AND rowid <= 30
