React-select causing slow load when rendering lots of component instances - reactjs

I have an app that is essentially a table view with many rows and columns of select inputs, and each select input can have thousands of options. Every select in a given column shares the same option set, and each row is a different document in the database. With regular selects holding this many options the page lags badly, so I looked around and decided to combine two packages.
Demo example: https://codesandbox.io/s/react-windowed-select-inside-react-window-dh24r?file=/index.js
My plan was to combine react-window with react-windowed-select, but this too has pretty bad performance issues. The lag is especially noticeable on initial render (notice the load time when clicking the React Select Rows button).
What is the best way to handle this sort of application in React? I'm currently thinking there are two options, though I'm sure there are many others:
1. Find a way to improve performance using the existing two packages.
2. Write something far more custom that loads all of the rows as plain text, with a single hidden react-windowed-select component per column. When the user clicks on what would be a select input, we position that column's select as if it were in the clicked location and open it (a rough sketch of this idea follows below).
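For what it's worth, here is a minimal sketch of option 2's "plain text until clicked" idea using the same two packages: every row renders a cheap static cell, and a WindowedSelect is only mounted for the cell currently being edited. The option count, sizes, and data shape are placeholders, and in a real table the selected values would need to live outside the row component, since react-window unmounts off-screen rows.

import React, { useState } from "react";
import { FixedSizeList } from "react-window";
import WindowedSelect from "react-windowed-select";

// Placeholder option set; in the real app each column would have its own.
const options = Array.from({ length: 5000 }, (_, i) => ({
  value: i,
  label: "Option " + i,
}));

// One row of the table. Nothing select-related is mounted until the user
// clicks the cell, so the initial render only pays for cheap static divs.
function Row({ index, style }) {
  const [editing, setEditing] = useState(false);
  const [value, setValue] = useState(null);

  return (
    <div style={style}>
      {editing ? (
        <WindowedSelect
          autoFocus
          menuIsOpen
          options={options}
          value={value}
          onChange={(v) => {
            setValue(v);
            setEditing(false);
          }}
          onBlur={() => setEditing(false)}
        />
      ) : (
        <div onClick={() => setEditing(true)}>
          {value ? value.label : "Click to choose…"}
        </div>
      )}
    </div>
  );
}

export default function Table() {
  // react-window keeps only the visible rows mounted.
  return (
    <FixedSizeList height={600} width={400} itemCount={10000} itemSize={40}>
      {Row}
    </FixedSizeList>
  );
}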
Any help or advice is appreciated.

Related

TClientDataset to edit a table with 100k+ records

A client wants to build a worksheet-like application to show data from a database (presumably in a TDbGrid or similar), allowing free search and editing of all cells, as you would in a worksheet. The underlying table will have more than 100k rows.
The problem with using TClientDataset is that it tends to load all the data into memory, violating the user's requirements, which are these three:
The user must be able to navigate from the first to the last record at any moment using the scroll bar, keyboard, or a search filter (note that TClientDataset will load all records if you go to the last record, AFAIK...).
The connection will be through an external VPN / the internet (possibly slow), so only the records actually visible on screen should be loaded. Never all of them.
Edits must be kept inside a transaction, so they can be committed or rolled back at the end, reconciling if necessary.
Is it possible to accomplish these 3 points using TClientDataset?
If not, what are the alternatives?
Answering just your last line regarding alternatives, I can add some suggestions:
1- You can use some creativity: provide pagination and fetch, let's say, 100 rows per page using a background thread hooked up to a nice progress bar in the UI. With this approach you must handle search and filters with some smart queries, reloading data when needed, etc. (see the sketch after these suggestions).
2- Use third-party components that are optimized for this purpose, like SDAC + EhLib DBGrid.
SDAC provides datasets that are useful for cached updates, and EhLib's DBGrid has a MemTable component inside it which is very powerful: free search and fuzzy or approximate matching work nicely, it is possible to revert, undo and redo, etc.
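The original context is Delphi / TClientDataset, but the paged-fetch idea in suggestion 1 is language-agnostic; here is a rough sketch of the query pattern in JavaScript (the runQuery helper, table name, and filter column are assumptions, not anything from the question):

const PAGE_SIZE = 100;

// Fetch only one window of rows; the database never ships the whole table.
async function fetchPage(runQuery, page, filter) {
  return runQuery(
    "SELECT * FROM documents WHERE name LIKE ? ORDER BY id LIMIT ? OFFSET ?",
    ["%" + filter + "%", PAGE_SIZE, page * PAGE_SIZE]
  );
}

// Run the fetch in the background and keep the UI's progress bar informed.
async function loadVisiblePage(runQuery, page, filter, onProgress) {
  onProgress("loading");
  const rows = await fetchPage(runQuery, page, filter);
  onProgress("done");
  return rows;
}

Changing the search text or filter simply restarts from page 0 with a new query, rather than reloading everything already fetched.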

Excel Scalability and Speed Issues (VBA, Array and Comboboxes)

Context
There are two Excel workbooks in the same location: database and dashboard. Whereas database.workbook has as many tabs as clients I manage, dashboard.workbook has as many tabs as reports are required.
Navigation across reports (dashboard.worksheets) is pretty simple. On each report there's a combobox that contains every dashboard worksheet's name. Selecting any report in that combobox hides the current worksheet/report and opens the desired one.
In each tab/report there is a second combobox that allows you to select a client, populating the report with the selected client's data.
The report
The information in the database looks like this:
Date|Device|Group|Subgroup|metric1|metric2|metric3|etc.
The information displayed in the report (in the one I'm having issues with) looks like this:
Group|metric1|metric2|metric3|...
The issues
1) Currently the group is displayed like this:
=IFERROR(LOOKUP(2,1/(COUNTIF($C$17:C18,IF($C$8="Goldsmiths",Client1_GroupName,IF($C$8="Client2",Client2_GroupName,IF($C$8="Client3",Client3_GroupName,IF($C$8="Client4",Client4_GroupName)))))=0),IF($C$8="Client1",Client1_GroupName,IF($C$8="Client2",Client2_GroupName,IF($C$8="Client3",Z2,Client3_GroupName($C$8="Client4",Client4_GroupName))))),"")
The combobox prints its value into Range("C8"). Through a nested ifs structure the formula identifies the client and then pulls a unique list of groups from the selected client tab (from database.workbook).
One issue is that it is very messy and hard to scale (the more clients I get, the more its complexity grows). I bet there are easier ways to do it (maybe VBA?).
It can also be quite slow: the more "groups" we get and the more days recorded in the database, the slower it gets.
2) Pulling the data
Most of the data to pull can be done through array formulas like this one:
={SUM((Client1_GroupName=C20)*Metric1)}
It sums all of Metric1 for the group matching C20, C21, C22, C23... (in that C20:xx range we have the first formula pulling the group list).
I haven't added the nested IFs yet. It's going to be a pain to do it across 5 more columns. Again, very hard to scale.
This can be terribly slow. It gets to the point where changing client means waiting 2 or 3 minutes for the array to recalculate.
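For what it's worth, the array formula above has a non-array equivalent that usually recalculates much faster. Assuming Metric1 and Client1_GroupName are same-sized named ranges (which the array formula already requires) and the database workbook is open (SUMIFS, unlike the array form, cannot read closed workbooks), the per-group sum can be written as:

=SUMIFS(Metric1,Client1_GroupName,$C20)

The client-selection IFs would still have to wrap it, but each branch stops being an array calculation.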
Conclusions
I guess what I'm seeking is some advice on how to face these issues, which essentially are scalability and speed.

What is the best way to handle a large amount of data in a dashboard?

I have created a simple dashboard which has 10 to 15 widgets, and each widget is built from more than 100,000 records, so there are more than 1,500,000 records in total. How do I handle that in the browser?
The dashboard I have created just hangs.
I don't think you can do much on the frontend, but on the backend, if you are able to change something there, I would suggest querying only the data that is required.
When you use charts, let's say for showing a timeline of sales per month, you would use GROUP BY in your SQL. This reduces the amount of data significantly, because you get back only the records that actually need to be shown instead of manipulating the full result in code.
If you use a datatable, handle pagination within your query instead of pulling all the data from the database, which hurts performance and takes time to load: pull, for example, the first 100 records and load the next 100 when the user clicks the next page or scrolls down (like Facebook does with their timeline). You can also consider using an in-memory database like Redis.
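A rough sketch of both suggestions, assuming a Node-style backend where db.query runs parameterized SQL against a hypothetical sales table (the table, column names, and MySQL-style date function are assumptions):

// 1) Aggregate in the database: a sales-per-month chart needs one row per
//    month, not the 100,000+ raw records behind it.
async function salesPerMonth(db) {
  return db.query(
    "SELECT DATE_FORMAT(sold_at, '%Y-%m') AS month, SUM(amount) AS total " +
      "FROM sales GROUP BY month ORDER BY month"
  );
}

// 2) Paginate the datatable: pull only the page being displayed and fetch
//    the next page when the user clicks "next" or scrolls down.
async function salesPage(db, page, pageSize = 100) {
  return db.query(
    "SELECT * FROM sales ORDER BY sold_at DESC LIMIT ? OFFSET ?",
    [pageSize, page * pageSize]
  );
}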
Hope this helps.

Tableau Performance - Custom SQL Queries

I am essentially building a report that ingests two types of data. One is the receptionist data, which is each receptionist's stats by day. The other is a little more granular: each call for each receptionist.
Essentially the report does two things: it shows receptionist performance, and then a person can click to prompt the same dashboard sheet to update with the specific call log, etc.
So basically this data set is huge. It is held as an extract so it will be faster, and I limit the data to this month and last month (the minimum requirement). I have also eliminated any unnecessary columns.
I am curious whether I should create two separate custom queries in Tableau and then create a referential field, or bring both custom queries into one workbook and join them together. At first I had the two connections separate, but now that I have brought them together I am noticing some performance issues. What are some of my options?
It would be better to have two separate queries, since the first view doesn't need all the additional details you want to show in the drill-down.
Use an action filter and link the two sheets (which use different data sources) by selecting the specific fields when configuring the action filter.
Performance-wise, this is a good approach.

Sorting in batches

I have a Java servlet which queries a DB and shows a table in the browser. I have implemented pagination so that new requests are made only when the user scrolls the table. The problem is that if the user chooses to sort the table in the UI by some column, the request takes a long time, because the actual table in the DB is quite big: it sorts the entire table and then sends the sorted, paged data to the client/browser. So suppose the table has 100k rows and I have a page size of 100 rows; is there a way to tweak the sorting in the DB, or the pagination in the servlet, so that sorting the entire 100k rows is not required?
Pagination may help. Here is how it is usually done. Unlike the old page-by-page loading, with a single-page infinite scroll you usually have a drop-down that lists the sorting column.
You load the first page, and as soon as the page's bottom appears you request the next page via AJAX. Up to here, I guess you are fine.
What seems to be troubling you is that if the user has scrolled 10 pages deep and then sorts, you would have to load 10 pages of data in one go. This is wrong.
Two things:
When you change the sorting criteria, you change the context in which you were viewing the table.
If you were to reload 10 pages and keep the user at the 10th page, he would be surprised by what happened.
So, as soon as the user changes the sort criteria, clear the DIV and load page 1 by the new criteria. You see, you do not have the burden any more, do you?
Couple of quick-tips:
I personally think it's better to let the DBMS do the sorting and pagination; it's made for that. You should write optimized queries.
Indexing columns to be sorted-by helps some.
If items do not change frequently, you may want to cache the pages (results from DB) with an appropriate TTL.
Leverage DBMS-provided special functions to optimize the query. Here is an article for MySQL.
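Put together, here is a sketch from the browser side, since the backend here is a Java servlet (the /rows endpoint, parameter names, and DOM structure are made up for illustration). The servlet keeps running something like SELECT ... ORDER BY <indexed column> LIMIT 100 OFFSET ?, and the page resets to page 1 whenever the sort criteria change instead of re-fetching everything already scrolled through:

let state = { page: 0, sortBy: "id" };

// Assumed UI helper: replaces or appends rows in the table's DIV.
function renderRows(rows, replace) {
  const div = document.getElementById("table");
  if (replace) div.innerHTML = "";
  for (const row of rows) {
    const el = document.createElement("div");
    el.textContent = JSON.stringify(row);
    div.appendChild(el);
  }
}

async function loadPage() {
  const res = await fetch(
    "/rows?page=" + state.page + "&sortBy=" + encodeURIComponent(state.sortBy)
  );
  renderRows(await res.json(), state.page === 0);
}

// Changing the sort criteria changes the context, so go back to page 1.
function onSortChange(column) {
  state = { page: 0, sortBy: column };
  loadPage();
}

// Infinite scroll only ever asks for the next page, never the whole table.
function onScrolledToBottom() {
  state.page += 1;
  loadPage();
}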
