Enable scrolling+paging in ADF table

I want to create a table that implements normal paging, but I want it to load 100 records per page and allow scrolling to view those records. My table is only big enough to display 18 records, so scrolling is needed to view all 100 records and perform bulk update operations.
I've tried scrollPolicy="loadMore", but this is not the desired solution.

To get pagination on an ADF table when there are more than 100 records, you have to:
Set the autoHeightRows attribute to the number of rows you want your table to display (i.e. the height of your table); in your case, 18.
Set the scrollPolicy attribute to page, so that your table displays a paginator when there are more than 100 rows.
Set the fetchSize attribute to the number of rows you want to preload, in your case 100 (it's better to drive it from your iterator's RangeSize attribute), as sketched below.

If you are using a View Object, make sure to tune it to support the range size: in the VO editor, on the General tab, under Tuning, set the fetch size to at least three (3) more than the range size. In this case that would be 103.

Related

SQL: Display chunks of data from a huge result set - performance

I have to fetch n records from the database. The count varies from manager to manager: one manager may have 100+ records while another has only around 50.
If it were just a matter of fetching data from a single table, it would be super easy.
In my case the main pain point is that I get my result only after many joins, temp tables, functions, math on some columns, and date filters using case expressions, and each table involved has 100k+ records with proper indexing.
I have added pagination on the UI side so that I show only 20 records at a time on screen. When a page number is clicked, I offset the records to get the next 20. Suppose page 3 is clicked; then from the DB I should get only records 41-60.
The UI part is not a big deal; the point is how to optimise the query so that each call returns only 20 records.
My current implementation calls the same procedure every time with an index value to offset the data. Is it correct to re-run the same complex query, with all its functions, CTEs, case expressions in filters, and inner/left joins, again and again just to fetch one small slice of the result set?
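For concreteness, a minimal sketch of the paging scheme the question describes, assuming SQL Server 2012+ (for OFFSET/FETCH) and hypothetical table and column names; the CTE stands in for the expensive joins:

-- Page 3 at 20 rows per page should return rows 41-60.
DECLARE @ManagerId INT = 42, @PageNumber INT = 3, @PageSize INT = 20;

WITH ManagerRecords AS (
    -- stand-in for the real query's joins, temp tables, and functions
    SELECT r.RecordId, r.ManagerId, r.CreatedDate
    FROM dbo.Records AS r
    WHERE r.ManagerId = @ManagerId
)
SELECT RecordId, ManagerId, CreatedDate
FROM ManagerRecords
ORDER BY RecordId                       -- paging needs a deterministic order
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;

Note that the engine still evaluates the underlying query for every page; if that is too expensive, materialising the result once into an indexed temp table (or using keyset paging, i.e. WHERE RecordId > @LastSeenId) avoids repeating the joins on each call.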

Data Table Rendering Issue for 10k records

We are displaying 10,000 rows in a table in the browser.
The example at https://datatables.net/extensions/scroller/examples/initialisation/large_js_source.html shows that 50,000 records can be displayed very quickly.
We are using Angular with ng-repeat, which takes 2 minutes to load the table. We have created 5 columns, one of which contains an action button whose options are enabled based on certain conditions for each row. Whenever we add a few options to the button, it takes more time;
for example, edit, delete, view, etc., in the action button that is part of a column.
We observed that whenever we add options inside the ng-repeat, the table takes a very long time to render.
How can we speed up the rendering time? Please advise.
Thanks.

Does posts_per_page prevent a full scan of the posts table?

I'm trying to prevent a full scan of the wp_posts table, which has 100K+ records, on our homepage. I only want the first 30 posts, with no pagination.
If I'm not using pagination and use posts_per_page to limit the query, does it still scan the whole table and then return the first 30, or does it read rows 1 to 30 and then stop?
The WordPress loop executed while loading a template queries only once, with a defined limit. That limit is set in Settings > Reading.
By contrast, if you use posts_per_page with query_posts, the query is re-executed. That is not recommended for large websites.
Refer to the Additional SQL Queries section here.
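For intuition, the main query WP_Query builds ends in a LIMIT clause, so MySQL can stop reading once it has enough rows; a simplified reconstruction of what it emits for posts_per_page = 30:

-- Simplified form of the main query for posts_per_page = 30
SELECT SQL_CALC_FOUND_ROWS wp_posts.ID
FROM wp_posts
WHERE wp_posts.post_type = 'post'
  AND wp_posts.post_status = 'publish'
ORDER BY wp_posts.post_date DESC
LIMIT 0, 30;

One caveat: by default WP_Query includes SQL_CALC_FOUND_ROWS so it can compute pagination totals, and that forces MySQL to examine every matching row despite the LIMIT. Since no pagination is wanted here, passing 'no_found_rows' => true to WP_Query removes it.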

The setting 'auto create statistics' causes wildcard TEXT field searches to hang

I have an interesting issue happening in Microsoft SQL Server when searching a TEXT field. I have a table with two fields, Id (int) and Memo (text), populated with hundreds of thousands of rows of data. Now, imagine a query such as:
SELECT Id FROM Table WHERE Id=1234
Pretty simple. Let's assume there is a row with Id 1234, so it returns one row.
Now, let's add one more condition to the WHERE clause.
SELECT Id FROM Table WHERE Id=1234 AND Memo LIKE '%test%'
The query should pull one record and then check whether the word 'test' exists in the Memo field. However, if there is enough data, this statement hangs, as if it were searching the Memo field first and then cross-referencing the results with the Id field.
While that is what it appears to be doing, I just discovered that it is actually trying to create a statistic on the Memo field. If I turn off "auto create statistics", the query runs instantly.
So my question is: how can you disable auto create statistics, but only for one query? Perhaps something like:
SET AUTO_CREATE_STATISTICS OFF
(I know, any normal person would just create a full-text index on this field and call it a day. The reason I can't necessarily do this is that our data center hosts an application for over 4,000 customers using the same database design. Not to mention, this problem happens on a variety of text fields in the database, so it would take tens of thousands of full-text indexes if I went that route. Adding full-text indexes would also mean extra storage requirements, backup changes, disaster recovery procedure changes, red-tape paperwork, etc.)
I don't think you can turn this off on a per-query basis.
The best you can do is identify all potentially problematic columns and then CREATE STATISTICS on them yourself with 0 ROWS or 0 PERCENT specified and NORECOMPUTE; a sketch follows below.
If you have a maintenance window you can run this in, it would be best to run it without the 0 ROWS qualifier but still leave NORECOMPUTE in place.
You could also consider enabling AUTO_UPDATE_STATISTICS_ASYNC instead, so that statistics are still rebuilt automatically but in the background rather than holding up compilation of the current query; note, though, that this is a database-wide option.
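A minimal sketch of that workaround, using the table and column from the question (the statistic and database names are placeholders; note that legacy text/ntext columns cannot be named in CREATE STATISTICS, so this assumes Memo is, or has been migrated to, varchar(max)):

-- Pre-create an empty statistic so the optimizer never triggers
-- auto-creation at compile time for this column
CREATE STATISTICS stat_Table_Memo
ON dbo.[Table] (Memo)
WITH SAMPLE 0 ROWS, NORECOMPUTE;

-- In a maintenance window, refresh it with real data
-- (drop the 0 ROWS sampling but keep NORECOMPUTE)
UPDATE STATISTICS dbo.[Table] stat_Table_Memo
WITH FULLSCAN, NORECOMPUTE;

-- Database-wide alternative: keep auto statistics, but build them
-- asynchronously so the current query's compilation is not blocked
ALTER DATABASE [YourDb] SET AUTO_UPDATE_STATISTICS_ASYNC ON;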

optimising sql select statement

Is there a way to fetch 4 million records from SQL Server 2005 in under 60 seconds?
My table consists of 15 columns. Each has a datatype of varchar(100), and there is no primary key.
Assuming you want the entire contents of the table, try this first:
SELECT col1, col2, ... col15 FROM your_table
If that is too slow, then there's not really anything more you can do, apart from changing your program's design so that it doesn't need to fetch so many rows at once.
If these records will be displayed in a graphical user interface, you could consider using paging instead of fetching all the rows at once.
Actually, the last time I did something like this, I put in a filter dropdown, and the records were then filtered by whatever the user selected. I also gave an "All" option in the dropdown; on selecting it, I showed the user a message like "Retrieving all records will be a bit slow. Want to continue?". And in any case, as Mark suggested, I used paging.
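As an illustration of the paging suggestion, a minimal sketch for SQL Server 2005, which has no OFFSET/FETCH, so ROW_NUMBER() is the usual approach (column names follow the question; with no primary key you must pick some stable combination of columns to order by):

-- Fetch one page of 500 rows instead of all 4 million at once
SELECT col1, col2, col3          -- ... list the remaining columns likewise
FROM (
    SELECT col1, col2, col3,     -- ... remaining columns
           ROW_NUMBER() OVER (ORDER BY col1) AS rn
    FROM your_table
) AS numbered
WHERE rn BETWEEN 1 AND 500       -- page 1; page n is (n-1)*500+1 .. n*500
ORDER BY rn;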
