I have to fetch n records from the database, where the count varies from manager to manager: one manager may have 100+ records while another has only 50 or so.
If it were just a matter of fetching data from a single table, this would be super easy.
In my case, the main pain point is that I get my result only after many joins, temp tables, functions, math on some columns, and date filters using CASE expressions, and each table has 100k+ records (with proper indexing).
I have added pagination on the UI side so that I show only 20 records at a time on screen. When a page number is clicked, I offset the records to get the next 20. For example, if page 3 is clicked, I should fetch only records 41-60 from the database.
The UI part is not a big deal; the question is how to optimise the query so that each request fetches only 20 records.
My current implementation calls the same procedure every time, with an index value used to offset the data. Is it correct to re-run the same complex query, with all its functions, CTEs, CASE filters, and inner/left joins, again and again just to fetch one small piece of the result set?
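One common pattern (SQL Server 2012 and later) is to keep the procedure but page inside it with OFFSET/FETCH, so the expensive query text is written once and only the requested slice is returned. This is only a sketch; the table, column, and parameter names are assumptions, not the actual schema:

```sql
-- Hypothetical paging query: the complex part (joins, CTEs, CASE filters)
-- stays as-is, and OFFSET/FETCH picks out one 20-row page.
;WITH Report AS
(
    SELECT r.RecordId, r.ManagerId, r.Amount
    FROM dbo.Records AS r
    INNER JOIN dbo.Managers AS m ON m.ManagerId = r.ManagerId
    -- ...the rest of the joins, functions, and CASE filters go here...
)
SELECT RecordId, ManagerId, Amount
FROM Report
ORDER BY RecordId                       -- OFFSET/FETCH requires an ORDER BY
OFFSET (@PageNumber - 1) * 20 ROWS      -- page 3 skips rows 1-40
FETCH NEXT 20 ROWS ONLY;
```

Note the engine still has to evaluate the query far enough to skip the earlier rows, so later pages get slower; a seek on an indexed key (keyset paging) avoids that.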
Here is my problem: I need to fetch a large set of records from various tables; to be exact, it involves 30 tables. I joined the 30 tables, and it took 20 minutes just to fetch 200 rows.
I was thinking of creating a stored procedure that makes transactional DB calls to fetch the data bit by bit and store it in a new report table.
Here is the nature of my business process:
On my web screen, I have 10 tabs of questionnaires that need to be filled in by an insurance client. Basically, I need to fetch all questions and answers and put them in one row.
The problem is that my clients won't finish all 10 tabs in one day; they might take up to 3 days to finish them all.
Initially I wanted to put an INSERT trigger on the primary table, fetch everything, and put it in a reporting table. But that only captures the record at t+0, not t+1 or t+n. How am I going to update the same row when a user updates another tab on another day?
To simplify my requirement: I have 10 tabs of questionnaires, and to keep the discussion simple, assume each tab has its own table. Completing the questionnaire does not require finishing it in one day.
How am I going to fetch all the data using transactional SQL in a stored procedure?
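One way to handle the same-row-across-days issue (a sketch only, not necessarily the right design for 30 tables; all names here are assumptions) is to upsert into the reporting table each time a tab is saved, instead of relying on an INSERT trigger:

```sql
-- MERGE (SQL Server): day 1 inserts the client's row, later days update it.
MERGE dbo.QuestionnaireReport AS target
USING (SELECT @ClientId AS ClientId, @Tab3Answers AS Tab3Answers) AS source
    ON target.ClientId = source.ClientId
WHEN MATCHED THEN
    UPDATE SET target.Tab3Answers = source.Tab3Answers  -- t+1..t+n: same row
WHEN NOT MATCHED THEN
    INSERT (ClientId, Tab3Answers)
    VALUES (source.ClientId, source.Tab3Answers);       -- t+0: new row
```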
I have a table with 12,000 records, and a query where this table is joined with a few other tables, plus paging. I measure time using SET STATISTICS TIME ON/OFF. The first pages are very fast, but the closer I get to the last page, the more time it takes. Is this normal?
This is normal, because SQL Server has no way to seek directly to a given page of a logical query. It scans through a stream of results until it has arrived at the page you wanted.
If you want constant-time paging, you need to provide some kind of seek key on an index. For example, if you can guarantee that your ID int column has consecutive values starting at 1, you can get any page in constant time simply by writing WHERE ID >= ... AND ID < ....
I'm sure you'll find other approaches on the web but there's nothing built into the product.
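The seek approach described above might look like this, assuming consecutive IDs starting at 1 and a page size of 20 (page n then covers IDs (n-1)*20+1 through n*20; table and column names are made up):

```sql
-- Constant-time page fetch via an index seek on ID; no earlier rows are
-- scanned and skipped, unlike OFFSET-style paging.
SELECT ID, Col1, Col2
FROM dbo.MyTable
WHERE ID >= (@Page - 1) * 20 + 1
  AND ID <   @Page * 20 + 1
ORDER BY ID;
```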
I have a table in SQL Server 2012 that contains call detail records. A simplified version of the schema is shown in this SQLFiddle.
It's trivial to count calls for a given region, but I would like to break the data down further into discrete half-hour buckets. I am then feeding the data into a chart, so the query needs to return all buckets, even if there were no calls in some of them.
Any thoughts?
Additionally, I can't lose the offsets on those values (note they are DATETIMEOFFSET type). Most solutions I've found out there involve throwing away that data because they can only handle DATETIME.
Create a dim_time table (or something to that effect) and insert time ranges into it, one row for each half-hour slot (you can automate this population).
select time_id, time_start, time_end
from dim_time
Now you have a table with all the time slots you are interested in... left join it to a count query to get the counts associated with each time slot.
Once your code is in, you can switch to 15-minute blocks, 2-hour blocks, or whatever, by manipulating the dim_time entries.
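A sketch of that left join, assuming a Calls table with a StartTime column (the names and the cast are assumptions; casting DATETIMEOFFSET to time is only safe here if the dim_time slots are meant in the same frame of reference as the stored offsets):

```sql
-- Empty slots still appear because COUNT(c.CallId) counts only matched rows
-- and returns 0 where the LEFT JOIN produced NULLs.
SELECT t.time_id, t.time_start, t.time_end,
       COUNT(c.CallId) AS CallCount
FROM dim_time AS t
LEFT JOIN Calls AS c
       ON CAST(c.StartTime AS time) >= t.time_start
      AND CAST(c.StartTime AS time) <  t.time_end
GROUP BY t.time_id, t.time_start, t.time_end
ORDER BY t.time_start;
```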
I'm new to pagination, so I'm not sure I fully understand how it works. But here's what I want to do.
Basically, I'm creating a search engine of sorts that generates results from a database (MySQL). These results are merged together algorithmically, and then returned to the user.
My question is this: When the results are merged on the backend, do I need to create a temporary view with the results that is then used by the PHP pagination? Or do I create a table? I don't want a bunch of views and/or tables floating around for each and every query. Also, if I do use temporary tables, when are they destroyed? What if the user hits the "Back" button on his/her browser?
I hope this makes sense. Please ask for clarification if you don't understand. I've provided a little bit more information below.
MORE EXPLANATION: The database contains English words and phrases, each of which is mapped to a concept (Example: "apple" is 0.67 semantically-related to the concept of "cooking"). The user can enter in a bunch of keywords, and find the closest matching concept to each of those keywords. So I am mathematically combining the raw relational scores to find a ranked list of the most semantically-related concepts for the set of words the user enters. So it's not as simple as building a SQL query like "SELECT * FROM words WHERE blah blah..."
It depends on your database engine (i.e., which flavor of SQL), but nearly every SQL flavor has support for paginating a query.
For example, MySQL has LIMIT and MS SQL has ROW_NUMBER.
So you build your SQL as usual, and then you just add the engine-specific pagination clause, and the server automatically returns only, say, rows 10 to 20 of the query result.
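For example, fetching rows 11-20 in both dialects mentioned above (the table and column names are made up for illustration):

```sql
-- MySQL: LIMIT offset, count
SELECT word, score
FROM results
ORDER BY score DESC
LIMIT 10, 10;

-- SQL Server (pre-2012): number the rows, then filter on the row number
SELECT word, score
FROM (
    SELECT word, score,
           ROW_NUMBER() OVER (ORDER BY score DESC) AS rn
    FROM results
) AS numbered
WHERE rn BETWEEN 11 AND 20;
```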
EDIT:
So the final query (which selects the data that is returned to the user) selects data from some tables (temporary or not), as I expected.
It's a SELECT query, which you can page with LIMIT in MySQL.
Your description sounds to me as if the actual calculation is way more resource-hogging than the final query which returns the results to the user.
So I would do the following:
1. Get the individual result tables for the entered words, and save them in a table in a way that lets you retrieve the data for this specific query later (for example, with an additional column like SessionID or QueryID). No pagination here.
2. Query these result tables again for the final query that is returned to the user. Here you can do paging by using LIMIT.
So you have to do the actual calculation (the resource-hogging queries) only once, when the user "starts" the query. Then you can return paginated results to the user by just selecting from the already-populated results table.
EDIT 2:
I just saw that you accepted my answer, but still, here's more detail about my usage of "temporary" tables.
Of course this is only one possible way to do it. If the expected result is not too large, returning the whole resultset to the client, keeping it in memory and doing the paging client side (as you suggested) is possible as well.
But if we are talking about real huge amounts of data of which the user will only view a few (think Google search results), and/or low bandwidth, then you only want to transfer as little data as possible to the client.
That's what I was thinking about when I wrote this answer.
So: I don't mean a "real" temporary table, I'm talking about a "normal" table used for saving temporary data.
I'm way more proficient in MS SQL than in MySQL, so I don't know much about temp tables in MySQL.
I can tell you how I would do it in MS SQL, but maybe there's a better way to do this in MySQL that I don't know.
When I have to page a resource-intensive query, I want to do the actual calculation once, save the result in a table, and then query that table several times from the client (to avoid redoing the calculation for each page).
The problem is: in MS SQL, a temp table only exists in the scope of the query where it is created.
So I can't use a temp table for that because it would be gone when I want to query it the second time.
So I use "real" tables for things like that.
I'm not sure whether I understood your algorithm example correctly, so I'll simplify it a bit. I hope I can make my point clear anyway:
This is the table (this is probably not valid MySQL; it's just to show the concept):

create table AlgorithmTempTable
(
    QueryID guid,
    Rank float,
    Value float
)
As I said before - it's not literally a "temporary" table, it's actually a real permanent table that is just used for temporary data.
Now the user opens your application, enters his search words and presses the "Search" button.
Then you start your resource-heavy algorithm to calculate the result once, and store it in the table:
insert into AlgorithmTempTable (QueryID, Rank, Value)
select '12345678-9012-3456789', foo, bar
from Whatever
insert into AlgorithmTempTable (QueryID, Rank, Value)
select '12345678-9012-3456789', foo2, bar2
from SomewhereElse
The GUID must be known to the client. Maybe you can use the client's SessionID for that (if there is one, and if the client can't start more than one query at once), or you can generate a new GUID on the client each time the user presses the "Search" button, or whatever.
Now all the calculation is done, and the ranked list of results is saved in the table.
Now you can query the table, filtering by the QueryID:
select Rank, Value
from AlgorithmTempTable
where QueryID = '12345678-9012-3456789'
order by Rank
limit 0, 10
Because of the QueryID, multiple users can do this at the same time without interfering with each other's queries. If you create a new QueryID for each search, the same user can even run multiple queries at once.
Now there's only one thing left to do: delete the temporary data when it's not needed anymore (only the data! The table is never dropped).
So, if the user closes the query screen:
delete
from AlgorithmTempTable
where QueryID = '12345678-9012-3456789'
This is not ideal in some cases, though. If the application crashes, the data stays in the table forever.
There are several better ways. Which one is the best for you depends on your application. Some possibilities:
You can add a datetime column with the current time as default value, and then run a nightly (or weekly) job that deletes everything older than X
Same as above, but instead of a weekly job you can delete everything older than X every time someone starts a new query
If you have a session per user, you can save the SessionID in an additional column in the table. When the user logs out or the session expires, you can delete everything with that SessionID in the table
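The first option above could look like this in MySQL (the column name and the 7-day cutoff are assumptions):

```sql
-- Timestamp each row on insert...
ALTER TABLE AlgorithmTempTable
    ADD CreatedAt datetime NOT NULL DEFAULT CURRENT_TIMESTAMP;

-- ...then let a nightly job (or the start of each new query) purge old data.
DELETE FROM AlgorithmTempTable
WHERE CreatedAt < NOW() - INTERVAL 7 DAY;
```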
Paging results can be very tricky. The way I have done this is as follows: set an upper-bound limit for any query that may be run, say 5,000. If a query returns more than 5,000 rows, limit the results to 5,000.
This is best done using a stored procedure.
Store the results of the query into a temp table.
Select Page X's amount of data from the temp table.
Also return back the current page and total number of pages.
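The steps above could be sketched as a procedure like this (T-SQL; the names and the 5,000 cap are assumptions, and the placeholder stands in for the real query):

```sql
CREATE PROCEDURE dbo.GetPagedResults
    @Page int,
    @PageSize int = 20
AS
BEGIN
    -- 1. Run the query once, capped at the upper bound, into a temp table.
    SELECT TOP (5000) *
    INTO #Results
    FROM dbo.SomeExpensiveQuery;            -- placeholder for the real query

    DECLARE @Total int = (SELECT COUNT(*) FROM #Results);

    -- 2. Return page X's worth of data.
    SELECT *
    FROM #Results
    ORDER BY 1
    OFFSET (@Page - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY;

    -- 3. Return the current page and the total number of pages.
    SELECT @Page AS CurrentPage,
           CEILING(1.0 * @Total / @PageSize) AS TotalPages;
END
```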
I've got a query which returns 30 rows. I'm writing code that will paginate those 30 rows into 5 records per page via an AJAX call.
Is there any reason to return just those 5 records up to the presentation layer? Would there be any benefit in terms of speed, or does it just fetch all the rows under the hood anyway?
If so, how do I actually do it in Sybase? I know Oracle has ROWNUM and MS SQL has something similar, but I can't seem to find a similar function in Sybase.
Unless your record length is huge, the difference between 5 and 30 rows should be completely unnoticeable to the user. In fact, there is significant potential that the multiple DB calls will harm performance more than help it. Just return all 30 rows to either your middle tier or your presentation layer, whichever makes more sense.
Some info here:
Selecting rows N to M without Oracle's rownum?
I've never worked with Sybase, but here's a link that explains how to do something similar:
http://www.dbforums.com/sybase/1616373-sybases-rownum-function.html
Since the solution involves a temp table, you can also use it for pagination. On your initial query, put the 30 rows into a temporary table and add a column for the page number (the first five rows would be page 1, the next five page 2, and so on). On subsequent page requests, you query the temp table by page number.
Not sure how you go about cleaning up the temp table, though. Perhaps when the user's session times out?
For 30 records, it's probably not even worth bothering with pagination at all.
I think in Sybase you can use:

select top 5 *
from your_table
where order_by_field > (the order_by_field value of the last record from the previous call)
order by order_by_field

Just make sure you use the same ORDER BY each time.
As for the benefit, I guess it depends on how many rows we are talking about, how big the table is, etc.
I agree completely with jmgant. However, if you want to do it anyway, the process goes something like this:
Select top 10 items and store in X
Select top 5 items and store in Y
X-Y
This entire process can happen in 1 SQL statement.
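The X minus Y step can be folded into one statement: select the top 5 rows that are not among the top 5, which yields rows 6-10 (names are assumptions, and whether TOP is allowed in a subquery depends on your Sybase version):

```sql
-- Rows 6-10 = (top 10) minus (top 5): take the next 5 rows after
-- excluding the first 5, using the same ORDER BY in both places.
SELECT TOP 5 *
FROM Items
WHERE ItemId NOT IN (SELECT TOP 5 ItemId FROM Items ORDER BY ItemId)
ORDER BY ItemId;
```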