I have an Angular 4 UI, a Web API, and SQL Server on the back end. I use a third-party grid from ag-Grid (https://www.ag-grid.com/).
Here is some info about my scenario:
1. I have 1 million records in total in SQL Server.
2. I need to group the data based on the user's selection in the UI. Typically, after grouping, the number of records to fetch from the DB comes to around 200 thousand.
3. The user will also do some filtering from the UI, which brings the number of records to fetch down to around 4,000 to 5,000.
I implemented paging (page size 100) on the server side: I send an offset and the number of records to fetch from the DB. The problem in this case is that, since the data is huge, the grouping at the database takes 15 seconds to finish, which is quite a lot of time.
Another approach I tried is to get the paged data from the DB without grouping and do the grouping in the API, which finishes in milliseconds. The problem here is that I show some values in the UI based on groups, such as the count and sum of some fields, and the values computed from the currently fetched data might be incorrect or incomplete, since the current group may span pages in the DB that have not been fetched yet.
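For reference, one way to keep the aggregates correct is to keep the grouping in the database and page over the grouped rows rather than the raw rows. A minimal sketch, assuming a hypothetical table `Sales(Region, Amount)` and a `@PageIndex` parameter (adjust names to your schema):

```sql
-- Sketch only: dbo.Sales, Region, Amount, and @PageIndex are assumed names.
-- The GROUP BY runs in the database, so Cnt and Total are complete per group,
-- and OFFSET/FETCH pages over the grouped result (page size 100).
SELECT Region,
       COUNT(*)    AS Cnt,
       SUM(Amount) AS Total
FROM   dbo.Sales
GROUP  BY Region
ORDER  BY Region
OFFSET @PageIndex * 100 ROWS
FETCH  NEXT 100 ROWS ONLY;
```

If the grouped query itself is the slow part, pre-aggregating (for example with an indexed view over the grouping columns) is one way to bring the 15 seconds down, at the cost of some write overhead.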
Any recommendations to deal with this kind of scenario would be appreciated.
Thanks.
Related
I have to fetch n records from the database. The count of these records may vary from manager to manager: one manager can have 100+ records while another may have only 50+.
If it were all about fetching data from only one table, it would be super easy.
In my case the main pain point is that I get my result only after using many joins, temp tables, functions, arithmetic on some columns, and date filters using CASE expressions, and more; and yes, each table has 100k+ records with proper indexing.
I have added pagination on the UI side so that I get only 20 records at a time on screen. Once I click a page number, I should offset the records for the next 20 accordingly: suppose I click page number 3, then from the DB I should get only records 41-60.
The UI part is not a big deal; the point is how to optimise the query so that each call fetches only 20 records.
My current implementation calls the same procedure every time with an index value to offset the data. Is it correct to run the same complex query, with all its functions, CTEs, CASE filters, and inner/left joins, again and again just to fetch one piece of the record set?
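One common alternative is to run the expensive query once, materialize its result into a keyed cache table with a row number, and then serve each page from that table. A minimal sketch, where `dbo.ResultCache`, `@SessionId`, `@PageNo`, and `dbo.YourComplexQuery` are all assumed names standing in for your actual objects:

```sql
-- Sketch only: ResultCache, @SessionId, @PageNo, and YourComplexQuery are
-- assumed names. Run the expensive query once per session, number the rows,
-- then each page request reads 20 rows by key instead of re-running the
-- joins/CTEs/functions.
INSERT INTO dbo.ResultCache (SessionId, RowNum, Payload)
SELECT @SessionId,
       ROW_NUMBER() OVER (ORDER BY r.CreatedDate),
       r.Payload
FROM   dbo.YourComplexQuery AS r;  -- stand-in for the complex joins/CTEs

-- Page 3 => rows 41-60
SELECT Payload
FROM   dbo.ResultCache
WHERE  SessionId = @SessionId
  AND  RowNum BETWEEN (@PageNo - 1) * 20 + 1 AND @PageNo * 20
ORDER  BY RowNum;
```

The trade-off is staleness: the cached result reflects the data as of the first call, so you need a cleanup/refresh policy per session.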
Here is my problem: I need to fetch a large record set from various tables; to be exact, it involves 30 tables. I joined the 30 tables, and it took 20 minutes just to fetch 200 rows.
I was thinking of creating a stored procedure to make some transactional DB calls to fetch the data bit by bit and store it in a new report table.
Here is the nature of my business process:
In my web screen, I have 10 tabs of questionnaire that need to be filled in by an insurance client. Basically I need to fetch all the questions and answers and put them in one row.
The problem is, my clients won't finish all 10 tabs in one day; they might take up to 3 days to finish all the tabs.
Initially I wanted to put an insert trigger on the primary table and fetch everything into a reporting table. But that only captures the record as of t+0, not t+1 or t+n. How am I going to update the same row if the user updates another tab on another day?
To simplify my requirement: I have 10 tabs of questionnaire and, to keep the discussion simple, each tab has its own table. Completing the whole questionnaire doesn't require you to finish it in one day.
How am I going to fetch all the data using transactional SQL in a stored procedure?
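One way the "update the same row on day t+1..t+n" part is often handled is an upsert with MERGE, run per tab table. A minimal sketch under assumed names (`Tab1Answers(ClientId, Q1, Q2)` for one tab's table and `Report(ClientId, Tab1_Q1, Tab1_Q2)` for the reporting table; repeat a similar statement for each of the 10 tabs):

```sql
-- Sketch only: Report, Tab1Answers, and all column names are assumed.
-- Inserts the client's reporting row the first time a tab is saved (t+0)
-- and updates the same row when the tab is saved again later (t+1..t+n).
MERGE dbo.Report AS tgt
USING dbo.Tab1Answers AS src
  ON tgt.ClientId = src.ClientId
WHEN MATCHED THEN
  UPDATE SET tgt.Tab1_Q1 = src.Q1,
             tgt.Tab1_Q2 = src.Q2
WHEN NOT MATCHED THEN
  INSERT (ClientId, Tab1_Q1, Tab1_Q2)
  VALUES (src.ClientId, src.Q1, src.Q2);
```

With this shape, the trigger (or the save procedure) only needs to fire the MERGE for the tab that changed, and the reporting row accumulates answers across days.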
I'm using a select box for the user to choose multiple usernames. The usernames are retrieved from the database with SELECT username FROM users, and all the data is loaded when the page is rendered.
For now it works because there aren't many users. But suppose the table had 1 million records: loading the whole table would take plenty of time. And if I send a query request only when the user starts typing, it may not be fast enough to retrieve the data.
So how to solve this?
You'll need to ensure a minimum of 3-4 characters is supplied to the back-end query (delay the query until 3-4 characters have been entered), then perform a 'starts with' lookup on an indexed column in your database.
This should restrict the data searched and returned. Make sure the column the query filters on is indexed!
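A minimal sketch of such a lookup, assuming a hypothetical `users(username)` table with an index on `username` and a `@prefix` parameter holding the 3-4 typed characters:

```sql
-- Sketch only: dbo.users, username, and @prefix are assumed names.
-- A 'starts with' pattern (no leading %) lets SQL Server seek the index
-- on username instead of scanning; TOP caps the rows sent to the UI.
SELECT TOP (10) username
FROM   dbo.users
WHERE  username LIKE @prefix + '%'
ORDER  BY username;
```

A leading wildcard (`'%' + @prefix + '%'`) would defeat the index seek, which is why the 'starts with' form matters here.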
Use a pagination technique: run the query to retrieve 100 records, then, if the user is still scrolling, retrieve more. It must be possible.
I am using a Silverlight DataGrid with a DomainDataSource, a DataPager, and EF 4.
Using the SQL Server profiler, I noticed two queries that were taking the bulk of the data-retrieval time: one gets the data for the given load size, and another gets the total page count. The one getting the page count is very slow for large data sets, much slower than getting the data itself!
So my question is this: is it possible to suppress this query? I know the DataPager needs to know how many pages there are, but I think I can work around that if I have to.
Thanks
Setting IncludeTotalCount to false on the query worked.
I did this in an override of the Load method on the database context, but I guess it can be done on a specific query too.
I have made a search but couldn't find a solution which works for me.
I just wonder how Facebook or LinkedIn manages to collapse the same type of activity into one sentence.
I mean, if you store every activity with a different ID in an Activity table, how can you list them as "Member_a and 15 more people changed their photos"?
I'm trying to make a social activity wall for my website. It's not that big, but I just want to know the logic behind this situation.
For example, when the first page loads, I make an Ajax call listing records 0-10, and if the user scrolls down, the page makes another Ajax call listing records 11-20.
Now, if I try to combine activities of the same type after the SQL SELECT, using if/else in code, and these 10 records all happen to be the same type, the user will only see 1 item. I hope I could explain what I mean :)
So, I need a solution that does this combining in the SQL statement itself.
I'm not asking you to write a query for me; I just want to know the logic.
Here is a screenshot of what I want to achieve:
You see, they are actually different stored records, but they were combined into a single network-update item.
By the way, I'm using C# and SQL Server 2008.
For example:
SELECT MIN(b.MemberName) AS MemberName, COUNT(*) AS Total
FROM Network_Feed a
JOIN Member b ON a.MemberID = b.MemberID
WHERE a.FeedType = 1
Did I understand your question right?
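Generalizing that idea to the whole wall, one query can collapse every activity type into one row each, and the same OFFSET/FETCH paging can drive the Ajax calls. A sketch only, assuming hypothetical columns `Network_Feed(MemberID, FeedType, CreatedDate)` and `Member(MemberID, MemberName)`:

```sql
-- Sketch only: CreatedDate is an assumed column on Network_Feed.
-- Each FeedType collapses into one row: a representative member name
-- plus a count ("Member_a and Total-1 more people..."), newest first.
SELECT a.FeedType,
       MIN(b.MemberName)  AS FirstMemberName,
       COUNT(*)           AS Total,
       MAX(a.CreatedDate) AS LatestActivity
FROM   Network_Feed a
JOIN   Member b ON a.MemberID = b.MemberID
GROUP  BY a.FeedType
ORDER  BY LatestActivity DESC
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;  -- first Ajax page; next call offsets 10
```

In practice you would probably also group by a time bucket (e.g. day) so that old and new photo changes don't merge into one item, but the collapsing logic stays the same.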
It's not easy to manage petabytes of data as one table, so big projects running on SQL Server use advanced scaling tricks (distributing data and load) such as Service Broker and replication.
You can check http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000004532 for an SQL Server example.