In one environment the database is slow and a query takes approximately 10 minutes to run, because of which other threads wait on the object and the entire JVM hangs. To simulate the issue and confirm that it is caused by the long-running query, I want to intentionally run the same query for 10 minutes in my own environment (i.e., slow the query down). We are using JDBC connectivity, and we are not using a query timeout. Can anyone please suggest how to slow my query so that it takes 10 to 15 minutes to execute?
What about using dbms_lock.sleep( Number_of_seconds ) in your query to introduce the delay?
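A minimal JDBC sketch of that approach, assuming an Oracle database (dbms_lock.sleep is Oracle-specific and requires EXECUTE privilege on DBMS_LOCK); the connection details below are placeholders:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class SlowQuerySimulator {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- replace with your own.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
                // Holds this connection for 600 seconds (10 minutes), mimicking
                // a query that takes that long to run, so threads waiting on the
                // same resource behave as they do in the slow environment.
                try (CallableStatement cs = conn.prepareCall(
                        "begin dbms_lock.sleep(600); end;")) {
                    cs.execute();
                }
            }
        }
    }

If the SELECT itself has to be the slow statement rather than a separate blocking call on the same connection, you can wrap dbms_lock.sleep in a small user-defined PL/SQL function and call that function from the query's WHERE clause.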
I would like to ask how I can change/increase the time allowed for executing a DB statement in IntelliJ IDEA. I am working with PostgreSQL, and when I generate a big amount of data and try to insert it into a table, the statement takes longer than 20 seconds, but my IntelliJ timeout is set to just 20 seconds.
[Screenshot of the timeout countdown omitted.] When it hits zero, it does nothing.
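For reference, the corresponding knob at the plain-JDBC level (underneath IntelliJ) is Statement.setQueryTimeout; a minimal sketch, with placeholder connection details and hypothetical table names:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LongRunningInsert {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- replace with your own.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");
                 Statement st = conn.createStatement()) {
                // Timeout in seconds; 0 means "wait indefinitely".
                st.setQueryTimeout(120);
                // Hypothetical bulk insert that takes longer than 20 seconds.
                st.execute("insert into target_table select * from staging_table");
            }
        }
    }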
We have a DMV query that executes every 10 minutes and inserts usage statistics such as SESSION_CURRENT_DATABASE, SESSION_LAST_COMMAND_START_TIME, etc., and it has supposedly been running fine for the last 2 years.
Today we were notified by the data hyperingestion team that the last records shown were from 6/10, so we found out the job had been stuck for 14 days, recording no new statistics since then. We immediately restarted the job and it has been executing successfully since this morning, but we have essentially lost the data for that 14-day period. Is there a way for us to execute this DMV query against $SYSTEM.DISCOVER_SESSIONS for the 6/10-6/24 window to recover those 14 days of data?
Or is all hope lost?
DMV query:
SELECT [SESSION_ID]
,[SESSION_SPID]
,[SESSION_CONNECTION_ID]
,[SESSION_USER_NAME]
,[SESSION_CURRENT_DATABASE]
,[SESSION_USED_MEMORY]
,[SESSION_PROPERTIES]
,[SESSION_START_TIME]
,[SESSION_ELAPSED_TIME_MS]
,[SESSION_LAST_COMMAND_START_TIME]
,[SESSION_LAST_COMMAND_END_TIME]
,[SESSION_LAST_COMMAND_ELAPSED_TIME_MS]
,[SESSION_IDLE_TIME_MS]
,[SESSION_CPU_TIME_MS]
,[SESSION_LAST_COMMAND_CPU_TIME_MS]
,[SESSION_READS]
,[SESSION_WRITES]
,[SESSION_READ_KB]
,[SESSION_WRITE_KB]
,[SESSION_COMMAND_COUNT]
FROM $SYSTEM.DISCOVER_SESSIONS
I wouldn't say it's "gone" unless the instance has been restarted or the database has been detached. For example, the DMV for procedure usage should still have data in it, but you won't be able to specifically recreate what it looked like 10 days ago.
You can get a rough idea by looking back through the 2 years of data you already have and getting a sense of whether there are spikes or consistent usage. Then grab a snapshot of the DMV today and extrapolate it back 14 days to approximate what usage was like.
We have a query that does a SQL Server SELECT with an IN clause for which we originally anticipated a few items (under 20?) -- now it's being asked to handle 8000. This causes a timeout.
Hibernate generates the query just fine, but as I understand it, SQL Server doesn't optimize for more than 64 items in an IN clause at a time, and performance falls off after that. We've verified this by running some queries manually -- the first batch of 64 takes ~5 seconds and the rest come back in 2 seconds, while the raw query takes minutes to complete.
Is there some way to tell Hibernate to break this up, or can (should?) I write some kind of extension/plugin for Hibernate that says "if you ask for more than 64 items, break those up, thread them, stitch them back together"?
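A minimal application-side sketch of that chunking idea, assuming a mapped entity named Item with a Long id (both names are hypothetical) and an open Hibernate Session:

    import java.util.ArrayList;
    import java.util.List;
    import org.hibernate.Session;

    public class ChunkedInQuery {
        // Stay below the point where the IN-list plan degrades.
        private static final int CHUNK_SIZE = 64;

        // Runs one IN query per chunk of ids and stitches the results back together.
        public static List<Item> findByIds(Session session, List<Long> ids) {
            List<Item> results = new ArrayList<>(ids.size());
            for (int i = 0; i < ids.size(); i += CHUNK_SIZE) {
                List<Long> chunk = ids.subList(i, Math.min(i + CHUNK_SIZE, ids.size()));
                results.addAll(session
                        .createQuery("from Item where id in (:ids)", Item.class)
                        .setParameterList("ids", chunk)
                        .getResultList());
            }
            return results;
        }
    }

If you also want to thread the chunks, note that a Hibernate Session is not thread-safe, so each worker thread would need its own Session.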
Goal --> Find the execution time a query will take to display its result, before actually executing the query.
Detailed description --> I am trying to run a simple query, for example the one shown below, and I want to find out how much actual time it will take before running it:
SELECT top 1 *
FROM table_name
The answer isn't fixed. You can run the query and have it take 5 seconds, then repeat it and have it take less than one second, because some part of the query or its results was cached.
Contention on tables makes a difference too. Selecting has to wait if it uses an index that's being updated.
All you can do is experiment to make estimates, potentially also using Ctrl+L (Display Estimated Execution Plan in SQL Server Management Studio) to get an estimated execution plan.
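For a programmatic version of the estimated plan, SQL Server's SET SHOWPLAN_XML ON makes subsequent statements return their estimated plan as XML instead of executing them; a rough JDBC sketch (connection details are placeholders), keeping in mind that the plan contains relative cost estimates, not a wall-clock prediction:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class EstimatedPlan {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- replace with your own.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost;databaseName=mydb", "user", "password");
                 Statement st = conn.createStatement()) {
                st.execute("SET SHOWPLAN_XML ON");
                // Not executed: SQL Server returns the estimated plan as XML.
                try (ResultSet rs = st.executeQuery("SELECT top 1 * FROM table_name")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1)); // the plan XML
                    }
                }
                st.execute("SET SHOWPLAN_XML OFF");
            }
        }
    }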
If I'm running the following query on PostgreSQL:
select *
from osm_pois_v06 pp
where pp.geom && ST_MakeEnvelope(8.174,48.298,12.431,50.930,4326);
I have to wait 1.34 minutes.
But if I run an execution plan (EXPLAIN ANALYZE) for the same query, the plan output [omitted here] tells me that the execution time is 2.624 seconds. Why is it so much less than 1.34 minutes?
Because you are using pgAdmin to fetch a result set of over 600000 rows.
pgAdmin is known to be slow when displaying large result sets. You'd be better off with psql.
The easiest way to estimate the impact of slow client software and/or a slow network connection on your query execution is to prefix your SELECT with create temp table ... as. If that runs in 4 seconds, and the query takes 1:34 without the create temp part, it is a safe bet that the bottleneck is in transferring the result set out of the DB and processing it on the receiving end.
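A small JDBC sketch of that timing comparison, using the query from the question (connection details are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ServerVsClientTiming {
        public static void main(String[] args) throws Exception {
            String select = "select * from osm_pois_v06 pp "
                    + "where pp.geom && ST_MakeEnvelope(8.174,48.298,12.431,50.930,4326)";
            // Placeholder connection details -- replace with your own.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost/osm", "user", "password");
                 Statement st = conn.createStatement()) {
                // Server-side only: rows are materialized in the DB, nothing is transferred.
                long t0 = System.nanoTime();
                st.execute("create temp table tmp_timing as " + select);
                System.out.printf("server side:   %.1f s%n", (System.nanoTime() - t0) / 1e9);

                // Full round trip: the same rows are sent to and consumed by the client.
                t0 = System.nanoTime();
                try (ResultSet rs = st.executeQuery(select)) {
                    int rows = 0;
                    while (rs.next()) rows++;
                    System.out.printf("with transfer: %.1f s (%d rows)%n",
                            (System.nanoTime() - t0) / 1e9, rows);
                }
            }
        }
    }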