Does posts_per_page prevent a full scan of the posts table?

I'm trying to prevent a full scan of the wp_posts table, which has 100K+ records, on our homepage. I only want the first 30 posts, with no pagination.
If I'm not using pagination and use posts_per_page to limit the query, does it still scan the whole table and then return the first 30, or does it read rows 1 through 30 and then stop?

The WordPress loop executed while loading a template runs its query only once, with a defined limit. That limit is set in Settings > Reading.
If you instead use posts_per_page with query_posts, the query is re-executed, which is not recommended for large sites.
Refer to the Additional SQL Queries section here.
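For context, posts_per_page ends up as a LIMIT clause in the SQL WordPress generates for the main loop. A simplified sketch of that query (actual column lists and conditions vary by version and query args):

SELECT SQL_CALC_FOUND_ROWS wp_posts.ID
FROM wp_posts
WHERE post_type = 'post' AND post_status = 'publish'
ORDER BY post_date DESC
LIMIT 0, 30

If MySQL can read rows in ORDER BY order from an index, the LIMIT lets it stop after 30 rows. Note, though, the SQL_CALC_FOUND_ROWS modifier, which WordPress adds so it can compute pagination totals: it forces the full result set to be materialized anyway. If you genuinely don't need pagination, passing 'no_found_rows' => true to WP_Query removes it and lets the LIMIT do its job.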

Related

SQL: Displaying chunked data from a huge list (performance)

I have to fetch n records from the database. The count varies from manager to manager: one manager can have 100+ records while another may have only 50.
If it were just a matter of fetching data from a single table, it would be easy.
In my case the main pain point is that I get my result only after many joins, temp tables, functions, math on some columns, and date filters using CASE expressions and more, and each table has 100K+ records, with proper indexing.
I have added pagination on the UI side so that I fetch only 20 records at a time. When a page number is clicked, I offset to the next 20 records; for example, clicking page 3 should fetch only records 41-60 from the database.
The UI part is not a big deal; the point is how to optimise the query so that each call returns only 20 records.
My current implementation calls the same procedure every time with an index value to offset the data. Is it correct to re-run the same complex query, with all its functions, CTEs, CASE filters, and inner/left joins, again and again just to fetch one slice of the result set? (A sketch of this pattern follows.)
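For reference, a minimal sketch of that pattern using SQL Server's OFFSET/FETCH; the procedure, table, and column names here are hypothetical:

-- Hypothetical paging procedure: the whole join still executes on every call;
-- OFFSET merely discards the leading rows, so deeper pages get slower.
CREATE PROCEDURE dbo.GetManagerRecords
    @ManagerId  INT,
    @PageNumber INT,
    @PageSize   INT = 20
AS
BEGIN
    SELECT r.RecordId, r.Amount, r.CreatedAt
    FROM dbo.Records AS r
    INNER JOIN dbo.Managers AS m ON m.ManagerId = r.ManagerId
    WHERE m.ManagerId = @ManagerId
    ORDER BY r.RecordId                        -- a stable, deterministic order is required
    OFFSET (@PageNumber - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY;
END

One common alternative when the underlying query is expensive is to materialize the full result once into an indexed temp or staging table keyed by row number, then serve every page from that table instead of re-running the joins.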

Solution for loading hints from the database

I'm using a select box that lets the user choose multiple usernames. The usernames are retrieved from the database with select username from users, and all of the data is loaded when the page renders.
For now this works because there aren't many users, but if the table had 1 million records, loading the whole table would take plenty of time. And if I send the query only when the user starts typing, it won't be fast enough to retrieve the data.
So how do I solve this?
You'll need to ensure a minimum of 3-4 characters is supplied to the backend query (delay the query until 3-4 characters have been entered), then perform a 'starts with' lookup on an INDEXED column in your database.
This should restrict the data searched and returned. Ensure the lookup column is indexed! A sketch follows.
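A possible shape for that, SQL Server flavored, with hypothetical table and column names:

-- A prefix search can use an index on username; a leading wildcard ('%abc%') cannot.
CREATE INDEX ix_users_username ON users (username)

SELECT TOP 20 username
FROM users
WHERE username LIKE 'abc%'   -- 'abc' stands in for the 3+ characters typed so far
ORDER BY username

Capping the result with TOP keeps the response small even for very common prefixes.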
Alternatively, use a pagination technique: run the query to retrieve 100 records, then retrieve more if the user keeps scrolling. It must be possible.

Key lookup with a massive number of columns in SQL Server

I have one simple query against a table with many columns (more than 1000).
When I run it selecting a single column, it returns in 2 seconds with a proper index seek; logical reads, CPU, and everything else are under thresholds.
But when I select more than 1000 columns, it takes 11 minutes and the plan shows a key lookup.
Have you folks faced this type of issue?
Any suggestions?
Normally, I would suggest adding those columns to the INCLUDE list of your non-clustered index; including them removes the LOOKUP from the execution plan (sketched below). But as with everything in SQL Server, it depends: if you're updating the table rather than just plain SELECTing from it, the LOOKUP might be acceptable.
If this query runs once per year, the overhead of an additional index is probably not worth it. If you need a quick response that single time of year, look into 'pre-executing' it and just presenting the stored result to the user.
The difference in your query plan might also come from join elimination (if your query joins multiple tables), or simply from the additional columns not existing in any of your current indexes.
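A minimal sketch of the covering-index idea, with hypothetical table and column names:

-- The index seeks on CustomerId; INCLUDEd columns are stored at the leaf level,
-- so the query below is satisfied by the index alone, with no key lookup.
CREATE NONCLUSTERED INDEX ix_orders_customer
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Status, Total)

SELECT OrderDate, Status, Total
FROM dbo.Orders
WHERE CustomerId = 42

With 1000+ columns this stops being attractive: including them all effectively duplicates the table in the index, which is exactly the 'it depends' trade-off above.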

SQL Server query with paging - different time for different pages

I have a table with 12,000 records and a query in which this table is joined with a few other tables, plus paging. I measure time using SET STATISTICS TIME ON/OFF. The first pages are very fast, but the closer I get to the last page, the longer it takes. Is that normal?
This is normal because SQL Server has no way to directly seek to a given page of a logical query. It scans through a stream of results until it has arrived at the page you wanted.
If you want constant-time paging you need to provide some kind of seek key on an index. For example, if you can guarantee that your ID int column has consecutive values starting at 1, you can get any page in constant time simply by saying WHERE ID >= ... AND ID < ...., as sketched below.
I'm sure you'll find other approaches on the web but there's nothing built into the product.
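A sketch of that seek-key approach, assuming a dense, gap-free ID column; names are hypothetical:

-- Jump straight to page @PageNumber; the index seek on ID makes this
-- constant time regardless of how deep the page is.
DECLARE @PageSize INT = 50, @PageNumber INT = 200

SELECT *
FROM dbo.Orders
WHERE ID >= (@PageNumber - 1) * @PageSize + 1
  AND ID <  @PageNumber * @PageSize + 1
ORDER BY ID

(Newer SQL Server versions did later add OFFSET/FETCH, but that still reads and discards all the skipped rows, so the seek-key point stands. And if IDs have gaps from deletes, pages come back short, which is why the consecutive-values guarantee matters.)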

Selecting rows between x and y from database

I've got a query which returns 30 rows. I'm writing code that will paginate those 30 rows into 5 records per page via an AJAX call.
Is there any reason to return just those 5 records up to the presentation layer? Would there be any benefit in terms of speed, or does it just fetch all the rows under the hood anyway?
If so, how do I actually do it in Sybase? I know Oracle has ROWNUM and MS SQL has something similar, but I can't seem to find an equivalent function in Sybase.
Unless your record length is huge, the difference between 5 and 30 rows should be completely unnoticeable to the user. In fact, there's a significant chance the multiple DB calls will harm performance more than help it. Just return all 30 rows to either your middle tier or your presentation layer, whichever makes more sense.
Some info here:
Selecting rows N to M without Oracle's rownum?
I've never worked with Sybase, but here's a link that explains how to do something similar:
http://www.dbforums.com/sybase/1616373-sybases-rownum-function.html
Since the solution involves a temp table, you can also use it for pagination: on the initial query, put the 30 rows into a temporary table and add a column for the page number (the first five rows are page 1, the next five page 2, and so on). On subsequent page requests, query the temp table by page number. A sketch follows.
I'm not sure how you'd go about cleaning up the temp table, though. Perhaps when the user's session times out?
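A rough sketch of that idea, Sybase flavored; the identity() numbering trick in select into is version-dependent (and whether the order by controls the numbering varies), and all names here are hypothetical:

-- Stage the 30 rows once, numbering them as they are inserted.
select row_id = identity(4), col1, col2
into   #paged
from   my_table
order  by col1

-- Serve page @page (5 rows per page) on each AJAX request.
select col1, col2
from   #paged
where  (row_id - 1) / 5 + 1 = @page
order  by row_id

As for cleanup: a #temp table is dropped automatically when the connection that created it closes, so it only survives as long as the session does.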
For 30 records, it's probably not even worth bothering with pagination at all.
I think in Sybase you can use something like:
select top 5 *
from   my_table
where  sort_col > @last_sort_value   -- the order-by value of the last row from the previous page
order  by sort_col
Just make sure you use the same order by each time.
As for the benefit, I guess it depends on how many rows we're talking about, how big the table is, etc.
I agree completely with jmgant. However, if you want to do it anyway, the process goes something like this:
Select top 10 items and store in X
Select top 5 items and store in Y
X-Y
This entire process can happen in one SQL statement, as sketched below.
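A T-SQL-flavored sketch of those steps in a single statement; older Sybase versions may not allow top in subqueries, in which case the two-step temp-table variant above applies (names are hypothetical):

-- X = top 10, Y = top 5; X minus Y yields rows 6-10, i.e. page 2 of 5-row pages.
select x.id, x.name
from   (select top 10 id, name from items order by id) as x
where  x.id not in (select top 5 id from items order by id)
order  by x.id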
