I have tried the SQL query below.
SELECT
    sql_id,
    child_number,
    sql_fulltext,
    elapsed_time,
    executions,
    round(elapsed_time_avg) elapsed_time_avg
FROM
    (
        SELECT
            command_type,
            sql_id,
            child_number,
            sql_fulltext,
            elapsed_time,
            cpu_time,
            disk_reads,
            executions,
            ( elapsed_time / executions ) elapsed_time_avg
        FROM
            v$sql
        WHERE
            executions > 0
        ORDER BY
            elapsed_time_avg DESC
    )
WHERE
    rownum <= 10;
I expect this to always return the top 10 most expensive queries in the database. My query does return results, but after some time the SQL_IDs change (the results change) even though I run the same query.
Your approach is correct. (However, I suggest sorting by ELAPSED_TIME instead of an average, since it's the total run time that matters most. A million fast queries can be worse than one slow query.) But you just have to keep in mind that queries will disappear from V$SQL as they age out of the shared pool, and it's hard to predict exactly how long something will stay in the shared pool.
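For example, a minimal variation of your own query that ranks by total ELAPSED_TIME instead of the per-execution average would look like this:
SELECT sql_id,
       child_number,
       sql_fulltext,
       elapsed_time,
       executions,
       round(elapsed_time / executions) elapsed_time_avg
FROM
    (
        SELECT sql_id,
               child_number,
               sql_fulltext,
               elapsed_time,
               executions
        FROM v$sql
        WHERE executions > 0
        ORDER BY elapsed_time DESC   -- rank by total run time, not the average
    )
WHERE rownum <= 10;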
You might want to look at the active session history, in V$ACTIVE_SESSION_HISTORY, which usually stores many hours' worth of data. And then look at DBA_HIST_ACTIVE_SESS_HISTORY, which stores 8 days of data by default. You'll have to adjust your queries, since those two views don't store sums; they store a row for each sample. You'll need to count the number of rows per SQL_ID to estimate the time spent. (V$ACTIVE_SESSION_HISTORY samples once per second, DBA_HIST_ACTIVE_SESS_HISTORY keeps one sample every 10 seconds.)
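As a rough sketch of that ASH approach (each row counts for roughly one second of database time; the one-hour window and the top-10 cut-off are just illustrative):
SELECT *
FROM
    (
        SELECT sql_id,
               count(*) estimated_seconds   -- one sample is roughly one second of DB time
        FROM v$active_session_history
        WHERE sample_time > systimestamp - interval '1' hour
          AND sql_id IS NOT NULL
        GROUP BY sql_id
        ORDER BY count(*) DESC
    )
WHERE rownum <= 10;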
One of the most important things to realize about tuning SQL is that you're not looking for perfection. You don't want to trace every single statement, or you'll go crazy. If you sample the system every X seconds and a statement doesn't show up, then you almost certainly don't care about that statement. It's fine if slow statements disappear from the top N list.
When you use the Snowflake TOP clause in a query, does the engine stop searching for rows once it has enough to satisfy the TOP X that needs to be returned?
I think it depends on the rest of your query. For example, if you use TOP 10 but don't supply an ORDER BY, then yes, it will stop as soon as 10 records have been returned, but your results are non-deterministic.
If you do use an ORDER BY, then the entire query has to be executed before the top 10 results can be returned, but your results will be deterministic.
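As an illustration (the table and column names here are made up, not from the question):
-- Can stop early once 10 rows are found; which 10 rows you get is not guaranteed
SELECT TOP 10 * FROM orders;

-- Deterministic, but the full result has to be produced and sorted
-- before the top 10 rows can be returned
SELECT TOP 10 * FROM orders ORDER BY order_date DESC;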
Here is a real example. If I run a SELECT on the SAMPLE_DATA.TPCH_SF10000.CUSTOMER table with a LIMIT 10, it returns in 1.8 seconds (no caching). This table has 1,500,000,000 rows in it. If I then check the query plan, it shows that only a tiny portion of the table was scanned, 1 out of 6,971 partitions.
You can see that it returns as soon as 10 records have been streamed back from the initial table scan, since there is nothing more it has to do.
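For reference, the statement being described is essentially the following (the exact column list isn't shown in the original, so SELECT * is assumed):
SELECT *
FROM SAMPLE_DATA.TPCH_SF10000.CUSTOMER
LIMIT 10;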
From my testing and understanding, it does not stop. You can see what's going on by looking at the execution plans: the last step is typically the "limit" step, i.e. the LIMIT (or whatever) is applied after full processing. Also, if you take a query that runs for, say, 20 seconds without a LIMIT (or similar) and add the LIMIT, you will typically not see any difference in the execution time (but be aware of fetch time). I typically run query performance testing in the UI to avoid issues with client-side tools that can mislead you due to limits on fetching and/or use of cursors.
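A simple way to check this yourself (the table and column names here are hypothetical) is to run the same heavy query with and without a LIMIT and compare the server-side execution times rather than the fetch times:
-- Without LIMIT: the full aggregation runs and every group is returned
SELECT some_key, count(*) AS cnt
FROM big_table
GROUP BY some_key
ORDER BY cnt DESC;

-- With LIMIT: the same work is done; only the final streaming of rows is cut short
SELECT some_key, count(*) AS cnt
FROM big_table
GROUP BY some_key
ORDER BY cnt DESC
LIMIT 10;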
I'm processing a 260M row, ~1,500 column table in chunks through a model in Python. Using the connectors, I grab a chunk of 100,000 records each time. I'm using LIMIT and OFFSET to churn through the table, and after each chunk I increase the OFFSET by the chunk size. As the OFFSET increases, the query time grows to the point where each chunk takes in excess of 45 minutes to grab toward the end. Here is a mock-up of my query:
SELECT ~50_fields
FROM mytable
WHERE a_couple_conditions
ORDER BY my_primary_key
LIMIT 100000 OFFSET #########
Given the performance, this is clearly a bad way to do it. I read that I might be able to use RESULT_SCAN to speed it up, but the docs say that I would still need to use ORDER BY against it, which seems to defeat the purpose. I actually don't care what order the records come into my process, just that I process each row exactly once.
Is there a way to get these queries running in a decent amount of time, or should I look into doing something like increasing the LIMIT dramatically for each chunk and then breaking it down further in my program? Any ideas or best practices for getting Snowflake to play ball?
What if you tried something like this?
SELECT ~50_fields, row_number() OVER (ORDER BY my_primary_key) as row_cnt
FROM mytable
WHERE a_couple_conditions;
and then loop through:
SELECT ~50_fields
FROM table(result_scan(query_id))
WHERE row_cnt BETWEEN x and xx;
where query_id is the query_id from the first statement. The initial select might take a long time to order the entire table, but the remaining chunks should be very quick and will not take longer and longer as you go.
All Python SQL clients that I'm aware of allow you to process the output of a query in batches. As an example, here's how snowflake-connector-python lets you retrieve result batches from a query:
from snowflake.connector import connect

with connect(...) as conn:
    with conn.cursor() as cur:
        # Execute a query.
        cur.execute('select seq4() as n from table(generator(rowcount => 100000));')
        # Iterate over a list of PyArrow tables for result batches.
        for table_for_batch in cur.fetch_arrow_batches():
            my_pyarrow_table_processing_function(table_for_batch)
With Snowflake in particular, the batch size can be controlled in megabytes (but not in rows, sadly) using the parameter CLIENT_RESULT_CHUNK_SIZE.
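If you want to experiment with that, it's a session-level parameter; something like this should work (the value is in MB, and the allowed range is documented by Snowflake, so treat the number as illustrative):
ALTER SESSION SET CLIENT_RESULT_CHUNK_SIZE = 100;  -- result chunk size in MB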
I have a huge SQL query, with probably 15-20 tables involved.
There are 6 to 7 subqueries which are joined again.
Most of the time this query takes about a minute to run and returns 5 million records.
So even if this query is badly written, it does have a query plan that lets it finish in a minute. I have ensured that the query actually ran and didn't use cached results.
Sometimes, the query plan gets jacked up and then it never finishes. I run a VACUUM ANALYZE every night on the tables involved in the query. work_mem is currently set at 200 MB. I have tried increasing this to 2 GB as well. I haven't experienced the query getting messed up when work_mem was 2 GB. But when I reduced it and ran the query, it got messed up. Now when I increased it back to 2 GB, the query is still messed up. Does this have something to do with the query plan not getting refreshed with the new setting? I tried DISCARD PLANS on my session.
I can only think of work_mem and VACUUM ANALYZE at this point. Are there any other factors that can cause a smoothly running query that returns results in a minute to suddenly never return anything?
Let me know if you need more details on any settings, or the query itself. I can paste the plan too, but the query and the plan are too big to paste here.
If there are more than geqo_threshold (typically 12) entries in the range table, the genetic optimiser will kick in, often resulting in random behaviour, as described in the question. You can solve this by:
increasing geqo_threshold
moving some of your table references into a CTE. If you already have some subqueries, promote one (or more) of these to a CTE. It is a kind of black art to identify clusters of tables in your query that will fit in a compact CTE (with relatively few result tuples, and not too many key references to the outer query).
Setting geqo_threshold too high (20 is probably too high ...) will cause the planner to need a lot of time to evaluate all the plans (the number of plans increases roughly exponentially with the number of RTEs). If you expect your query to need a few minutes to run, a few seconds of planning time will probably do no harm.
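A rough sketch of both options (the threshold value, table names, and CTE shape are illustrative, not taken from the question):
-- Option 1: raise the threshold so the exhaustive planner is still used
SET geqo_threshold = 16;

-- Option 2: promote a cluster of tables to a CTE so the outer query plans
-- fewer range-table entries (MATERIALIZED needs PostgreSQL 12+; older
-- versions materialize CTEs by default)
WITH order_totals AS MATERIALIZED (
    SELECT o.customer_id, sum(l.amount) AS total_amount
    FROM orders o
    JOIN order_lines l ON l.order_id = o.id
    GROUP BY o.customer_id
)
SELECT c.name, t.total_amount
FROM customers c
JOIN order_totals t ON t.customer_id = c.id;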
How do I run the query below (from this MSDN article) to determine the top worst queries (by CPU time) but only for a set date?
-- Find top 5 queries
SELECT TOP 5 query_stats.query_hash AS "Query Hash",
    SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
    MIN(query_stats.statement_text) AS "Statement Text"
FROM
    (SELECT QS.*,
        SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
            ((CASE statement_end_offset
                WHEN -1 THEN DATALENGTH(st.text)
                ELSE QS.statement_end_offset END
                - QS.statement_start_offset)/2) + 1) AS statement_text
     FROM sys.dm_exec_query_stats AS QS
     CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;
GO
Our database has just come under serious strain in the last day and we cannot figure out the source of the problem.
We are using Azure SQL Database.
It's not possible to get per-day statistics from the DMVs. dm_exec_query_stats has the columns creation_time and last_execution_time, which of course can give you some idea of what has happened -- but those are only the first and last times that plan was used. The statistics will also be lost if the plan gets dropped out of the plan cache, so you might not have that plan and its statistics anymore if the situation is now better (and the "bad" plans have been replaced by better ones).
That query shows the average CPU used by the queries, so it's not the perfect query for solving performance problems: because it really is an average, something with a small execution count can rank really high in the list even if it's really not a problem. I usually use total CPU and total logical reads for solving performance issues -- but those are totals since creation time, which might be a long time ago. In that case you might also consider dividing the numbers by the hours since the creation time, so you get average CPU / I/O per hour. Also, looking at the max* columns might give some hints about the bad queries / plans.
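A hedged variation of the question's query along those lines, ranking by total CPU and adding a rough per-hour figure (the TOP 20 cut-off is arbitrary):
SELECT TOP 20
       QS.query_hash,
       SUM(QS.total_worker_time)   AS total_cpu_time,
       SUM(QS.total_logical_reads) AS total_logical_reads,
       SUM(QS.execution_count)     AS execution_count,
       SUM(QS.total_worker_time)
           / NULLIF(DATEDIFF(hour, MIN(QS.creation_time), GETDATE()), 0) AS cpu_per_hour
FROM sys.dm_exec_query_stats AS QS
GROUP BY QS.query_hash
ORDER BY total_cpu_time DESC;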
If you have problems like this, it might be a good idea to schedule that SQL as a task and gather the results somewhere. Then you can also use it as a baseline for comparing what has changed when the situation is bad. Of course in that case (and probably also otherwise) you should most likely look at more than just the top 5.
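One way to do that gathering, as a sketch (the snapshot table name and the schedule are up to you):
-- Run from a scheduled job (e.g. every 30 minutes) so the cumulative
-- DMV counters can be compared between snapshots later.
IF OBJECT_ID('dbo.query_stats_snapshot') IS NULL
    CREATE TABLE dbo.query_stats_snapshot (
        snapshot_time       datetime2 NOT NULL DEFAULT sysutcdatetime(),
        query_hash          binary(8),
        total_worker_time   bigint,
        total_logical_reads bigint,
        execution_count     bigint
    );

INSERT INTO dbo.query_stats_snapshot
    (query_hash, total_worker_time, total_logical_reads, execution_count)
SELECT QS.query_hash,
       SUM(QS.total_worker_time),
       SUM(QS.total_logical_reads),
       SUM(QS.execution_count)
FROM sys.dm_exec_query_stats AS QS
GROUP BY QS.query_hash;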
I have a database containing records collected every 0.1 seconds, and I need to time-average the data from a given day down to one value every 20 minutes. So I need to return a day's worth of data averaged into 20-minute buckets, which is 24*3 = 72 values.
Currently I do a separate AVG call to the database for each 20-minute period within the day, which is 24*3 calls. My connection to the database seems a little slow (it is remote) and it takes ~5 minutes to do all the averages. Would it be faster to do a single query in which I access the entire day's worth of data and then average it into 20-minute buckets? If it helps to answer the question, I have to do some arithmetic on the data before averaging, namely multiplying several table columns.
You can calculate the number of minutes since midnight like:
datepart(hh,datecolumn)*60 + datepart(mi,datecolumn)
If you divide that by 20, you get the number of the 20 minute interval. For example, 00:10 would fall in interval 0, 00:30 in interval 1, and 15:30 in interval 46, and so on. With this formula, you can group on 20 minute intervals like:
select
(datepart(hh,datecolumn)*60 + datepart(mi,datecolumn)) / 20 as IntervalNr
, avg(value)
from YourTable
group by (datepart(hh,datecolumn)*60 + datepart(mi,datecolumn)) / 20
You can do math inside the avg call, like:
avg(col1 * col2 - col3 / col4)
In general, reducing the number of queries is a good idea. Aggregate and do whatever arithmetic/filtering/grouping you can in the query (i.e. in the database), and then do 'iterative' computations on the application server side (e.g. in PHP).
To be sure whether it would be faster or not, you should measure it.
However, it should be faster, because with a slow connection to the database the number of round trips has a big impact on the total execution time.
How about a stored procedure on your database? If your database engine doesn't support one, how about having a script or something do the math and populate a separate 'averages' table on your database server? Then you only have to read the averages from the remote client once a day.
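As a sketch of that idea (T-SQL, reusing the hypothetical names from the other answers: YourTable, datecolumn, col1..col4, plus a new DailyAverages table):
-- Compute one row per 20-minute interval for a given day and store it,
-- so the remote client only reads 72 rows instead of the raw samples.
DECLARE @day date = '2019-01-01';  -- the day to summarize (illustrative)

INSERT INTO DailyAverages (IntervalNr, AvgValue)
SELECT (datepart(hh, datecolumn) * 60 + datepart(mi, datecolumn)) / 20 AS IntervalNr,
       avg(col1 * col2 - col3 / col4) AS AvgValue
FROM YourTable
WHERE datecolumn >= @day AND datecolumn < dateadd(day, 1, @day)
GROUP BY (datepart(hh, datecolumn) * 60 + datepart(mi, datecolumn)) / 20;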
Computation in a single query would be slightly faster. Think of the overhead of multiple requests: setting up the connection, parsing the query, loading the stored procedure, etc.
But also make sure that you have accurate indexes, which can result in a huge performance increase. Some operations on huge databases can last from minutes to hours.
If you are sending a lot of data, and the connection is the bottleneck, how and when you group and send the data doesn't matter. There is no good way to send 100MB every 10 minutes over a 56k modem. Figure out the size of your data and bandwidth and be sure you can even send it.
That said:
First be certain the network is the bottleneck. If so, try to work with a smaller data set if possible, and test different scenarios. In general, one large recordset will use less bandwidth than two recordsets that are each half the size.
If possible, add columns to your table and compute and store the column product and interval index (see Andomar's post) every time you post data to the database, as sketched below.
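One way to do that without touching the insert code is with persisted computed columns (SQL Server syntax; the suggestion above to compute the values in the application when posting data works just as well). The names reuse the hypothetical YourTable/datecolumn/col1..col4 from the earlier answers:
-- Store the product and the 20-minute interval index alongside each sample,
-- so the averaging query only groups and averages pre-computed values.
ALTER TABLE YourTable
    ADD ColProduct AS (col1 * col2 - col3 / col4) PERSISTED,
        IntervalNr AS ((datepart(hh, datecolumn) * 60 + datepart(mi, datecolumn)) / 20) PERSISTED;

-- The daily aggregation then becomes a simple grouped average:
SELECT IntervalNr, avg(ColProduct) AS AvgValue
FROM YourTable
GROUP BY IntervalNr;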