I have enabled the logging for Postgres queries which are taking longer than 5000 ms.
But in the logs, there are a lot of queries that take less time than the specified threshold when I run them against the same DB using pgAdmin.
Related parameter set in parameter group:
log_min_duration_statement = 5000ms
log_statement = all
Are all queries being logged, or am I missing something?
You need to change log_statement so that it does not log all queries:
log_statement = none
and instead enable logging of slow queries only:
log_min_duration_statement = 5000
These two settings are independent of each other.
Consult the PostgreSQL documentation for more details about these configuration parameters.
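Concretely, in postgresql.conf (or an RDS parameter group) the combination looks like this. Note that in postgresql.conf a bare integer is interpreted as milliseconds, and a value with a unit must be quoted:

```ini
# Log only statements that run for 5 seconds or longer
log_min_duration_statement = 5000    # milliseconds; '5s' (quoted) is equivalent

# Do not log every statement regardless of duration
log_statement = none
```

A configuration reload (pg_ctl reload, or SELECT pg_reload_conf();) is enough to apply both settings; no restart is needed.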
I have a Hangfire service running on one of my servers. I'm a DBA, and I'm sometimes asked to trigger jobs using the Dashboard, but connecting to the jobs' server takes me a lot of time due to connectivity and security issues.
To work around that, I want to trigger those jobs by inserting into Hangfire's tables in the database. I can already query those tables to find which job executed when, and whether it failed, succeeded, or is still enqueued. Does anyone know an approach to do this?
I've included a sample of the two tables which I think will be involved in this trick; their names are Hash and Set respectively:
Hangfire normally exposes a web dashboard, similar to Swagger in .NET (e.g. http://localhost:5000/hangfire); there should be an immediate-trigger feature for recurring jobs there. If not, a second option is changing the job's cron expression to run every minute, or maybe every 30 seconds.
Does anyone have experience with the maximum execution time of Flyway migrations?
What is the maximum execution time, if Flyway sets one (or does this depend primarily on database settings)?
What will happen when this time is hit?
What if multiple migrations run in a chain and one of them times out; what will happen?
I have been unable to find any related information in the docs or any articles.
Flyway itself currently does not set a timeout or maximum execution time. The timeout is managed by the target database and the settings on your connection to it.
There is a GitHub issue thread here on this topic if you would like a timeout to be added and would like to share your scenario with the Flyway team.
What happens when you hit a timeout (or if there is a network or other failure which causes the query to disconnect) will vary depending on how you are using transactions and whether your target database supports DDL statements within a transaction.
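As an illustration of handling this at the database layer (not a Flyway feature): a migration script can cap its own execution time so it fails fast instead of hanging. On PostgreSQL, for example, a hypothetical migration might look like this:

```sql
-- V2__add_orders_index.sql (hypothetical migration name and table)
-- Abort any statement in this session that runs longer than 60 seconds.
SET statement_timeout = '60s';

CREATE INDEX idx_orders_created_at ON orders (created_at);
```

If the timeout fires, PostgreSQL aborts the statement with an error; since PostgreSQL supports transactional DDL, Flyway rolls the migration back and marks it as failed, and the remaining migrations in the chain are not run.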
In order to investigate query plan usage, I'm trying to understand what kind of query plans are stored in memory.
Using this query:
SELECT objtype AS 'Cached Object Type',
       COUNT(*) AS 'Number of Plans',
       SUM(CAST(size_in_bytes AS BIGINT)) / 1048576 AS 'Plan Cache Size (MB)',
       AVG(usecounts) AS 'Avg Use Counts'
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY objtype;
I got an almost empty plan cache structure.
There is 128 GB of RAM on the server and ~20% is free. The SQL Server instance is not constrained by memory.
Yes, basically I have ad hoc queries (not parameterized, not stored procedures).
But why does SQL Server empty the query plan cache so frequently? What kind of issue do I have?
In the end, only an instance restart solved my problem. Now the plan cache looks healthier.
If the server isn't under memory pressure, then some other possibilities from the plan caching white paper are below.
Are any of these actions scheduled frequently? Do you have auto close enabled?
The following operations flush the entire plan cache, and therefore,
cause fresh compilations of batches that are submitted the first time
afterwards:
Detaching a database
Upgrading a database to SQL Server 2005
Upgrading a database to SQL Server 2008
Restoring a database
DBCC FREEPROCCACHE command
RECONFIGURE command
ALTER DATABASE … MODIFY FILEGROUP command
Modifying a collation using ALTER DATABASE … COLLATE command
The following operations flush the plan cache entries that refer to a
particular database, and cause fresh compilations afterwards.
DBCC FLUSHPROCINDB command
ALTER DATABASE … MODIFY NAME = command
ALTER DATABASE … SET ONLINE command
ALTER DATABASE … SET OFFLINE command
ALTER DATABASE … SET EMERGENCY command
DROP DATABASE command
When a database auto-closes
When a view is created with CHECK OPTION, the plan cache entries of the database in which the view is created are flushed.
When DBCC CHECKDB is run, a replica of the specified database is created. As part of DBCC CHECKDB's execution, some queries against the
replica are executed, and their plans cached. At the end of DBCC
CHECKDB's execution, the replica is deleted and so are the query plans
of the queries posed on the replica.
The following sp_configure/reconfigure operations also clear the procedure cache:
access check cache bucket count
access check cache quota
clr enabled
cost threshold for parallelism
cross db ownership chaining
index create memory
max degree of parallelism
max server memory
max text repl size
max worker threads
min memory per query
min server memory
query governor cost limit
query wait
remote query timeout
user options
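The auto-close possibility mentioned above is easy to rule out with a quick check against sys.databases:

```sql
-- Databases with AUTO_CLOSE enabled; each close/reopen cycle flushes
-- the plan cache entries for that database
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;
```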
I had the same issue just about a week ago and also posted several questions. Even though I have not actually found the answer to the problem, I've got some insight into the process. Silly as it sounds, a SQL Server service restart helped, but it raised another problem: the recovery process continued for 4 hours. It seems a pretty large transaction was in place...
I have a few "inefficient" queries that I am trying to debug on Azure SQL (v12). The problem I have is that after the query executes for the first time (albeit, many seconds) Azure appears to cache the query / execution plan. I have done some research and several people have suggested adding and removing a column will clear the cache but this doesn't seem to work. If I leave the server alone for a few hours / overnight and re-run the query it takes its usual time to execute but once again the cache is in place - this makes it very hard to optimise my query. Does anyone know how to force Azure SQL to not cache my queries / execution plans?
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE is designed to help with this problem.
https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql?view=sql-server-2017
This is closest to the DBCC FREEPROCCACHE you have in SQL Server but is scoped to a database instead of the server instance. This does not prevent caching of query plans - it just invalidates the current cache entries.
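For reference, the statement takes no arguments and applies to the current database:

```sql
-- Invalidate all cached plans for this database only
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
```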
Please note that Query Store is there to help you in SQL Azure (it is on by default). It stores a history of plan choices and per-plan performance. So, if a prior, better-performing plan is available in your application's history, you can force it using SSMS if you'd prefer to have the query optimizer pick that plan each time your query compiles.
One common reason for what you are seeing is parameter sensitivity in the plan choice: the optimizer uses the passed parameter value to generate the query plan, assuming it represents a common pattern for that query. If that value is actually not close to a common value (in terms of how frequent it is in the table), you can sometimes compile and cache a plan that is not the best on average for your application.
Query store has an overview here:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
Note that SQL Azure also has an automated mechanism that tries forcing prior plans if it notices a performance regression. It is somewhat conservative, however, so it may not kick in for every single regression until it sees an obvious pattern over time. So, while you can force things in SSMS, you can also potentially just wait (assuming this is the issue you were seeing).
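If you do want to force a known-good plan yourself rather than through the SSMS UI, Query Store exposes the same operation through catalog views and a stored procedure; the query_id and plan_id values below are placeholders you would look up first:

```sql
-- Inspect the plans Query Store has recorded, with average duration per plan
SELECT qsq.query_id, qsp.plan_id, qsrs.avg_duration
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp
  ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats AS qsrs
  ON qsrs.plan_id = qsp.plan_id;

-- Force the chosen plan (42 and 7 are placeholder ids)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```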
I have a performance issue with a method that calls org.hibernate.Query#list. The duration of the method call varies over time: it usually lasts about one second, but on some days, for maybe half a day, it takes about 20 seconds.
How can this issue be resolved? How can the cause for this issue be determined?
More elements in the analysis of this issue:
Performance issues have been observed in production environment, but the described issue is in a test environment.
The issue has been observed for at least several weeks but the date of its origin is unknown.
The underlying query is a view (select) in MS SQL Server (2008 R2):
Database reads/writes in this test environment come from only a few users at a time: the database server should not be under much load, and the data only changes slowly over time.
Executing the exact query directly from a MS SQL Server client always takes less than a second.
Duplicating the database (using the MS SQL Server client to back up the database and restore the backup as a new database) does not reproduce the problem: the method call is fast on the duplicate.
The application uses Hibernate (4.2.X) and Java 6.
Upgrading from Hibernate 3.5 to 4.2 has not changed anything about the problem.
The method call is always with the same arguments: there is a test method that does the operation.
Profiling the method call (using hprof) shows that when it is long, most of the time is spent on "Object.wait" and "ref.ReferenceQueue.remove".
Using log4jdbc to log the underlying query duration during the method call shows the following results:
query < 1s => method ~ 1s
query ~ 3s => method ~ 20s
The query generates POJOs as described in the most up-voted answer from this issue.
I have not tried using a Constructor with all attributes as described in the most up-voted answer from this other similar issue because I do not understand what effect that would have.
A possible cause of apparently random slowness with a Hibernate query is the flushing of the session. If some statements (inserts, updates, deletes) in the same transaction are unflushed, the list method of Query might trigger an autoflush (depending on the current flush mode). If that's the case, the performance issue might not even be caused by the query on which list() is called.
It seems the issue is with MS SQL Server and the updating of procedure's plan: following DBCC FREEPROCCACHE, DBCC DROPCLEANBUFFERS the query and method times are consistent.
A solution to the issue may be to upgrade MS SQL Server: upgrading to MS SQL Server 2008 R2 SP2 resulted in the issue not appearing anymore.
It seems the difference between the duration of the query and that of the method is an exponential factor related to the objects being returned: most of the time is spent on a socket read of the result set.