If I do 'show servers', it comes back with something like 40 results. If I connect to the database at the same time and count connections in pg_stat_activity (counting only the connections belonging to pgbouncer), it comes back with something like 85 connections. I thought those counts should match. What am I misunderstanding? CentOS 7, PgBouncer 1.8.1, PostgreSQL 9.6.10.
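For context, the two counts come from something like the following (the admin port and the filter on usename are assumptions about my setup; adjust to however your pooler logs in):

```sql
-- On the PgBouncer admin console (e.g. psql -p 6432 pgbouncer):
SHOW SERVERS;

-- On the database itself, counting backends opened by the pooler:
SELECT count(*) FROM pg_stat_activity WHERE usename = 'pgbouncer';
```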
I am a bit new to PostgreSQL. I have set up my PostgreSQL DB on the Azure cloud.
It's an Ubuntu 18.04 LTS (4 vCPU, 8 GB RAM) machine with PostgreSQL version 9.6.
The problem is that when the connection to the PostgreSQL DB stays idle for some time, say 2 to 10 minutes, the connection stops responding: it doesn't fulfill the request and the query keeps processing.
The same goes for my Java Spring Boot application. The connection doesn't respond and the query keeps processing.
This happens randomly, and the timing is not traceable: sometimes it happens in 2 minutes, sometimes in 10 minutes, and sometimes not at all.
I have tried the PostgreSQL configuration file parameters tcp_keepalives_idle, tcp_keepalives_interval and tcp_keepalives_count,
and also the statement_timeout and session_timeout parameters, but it doesn't change anything.
Any suggestion or help would be appreciated.
Thank you
If you are setting up a PostgreSQL DB connection on an Azure VM, you have to be aware that there are inbound and outbound connection timeouts. According to
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#idletimeout , outbound connections have a 4-minute idle timeout, and this timeout is not adjustable. For the inbound timeout there is an option to change it in the Azure Portal.
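If TCP keepalives are the route you take, note that the actual PostgreSQL parameter names use the plural "keepalives", and the intervals must add up to well under the 4-minute Azure limit. A sketch with illustrative values:

```
# postgresql.conf (server side)
tcp_keepalives_idle = 120     # seconds of idleness before the first probe
tcp_keepalives_interval = 30  # seconds between probes
tcp_keepalives_count = 3      # lost probes before the connection is considered dead
```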
We ran into a similar issue and were able to resolve it on the client side. We changed the Spring Boot default Hikari configuration as follows:
hikari:
  connection-timeout: 20000
  validation-timeout: 20000
  idle-timeout: 30000
  max-lifetime: 40000
  minimum-idle: 1
  maximum-pool-size: 3
  connection-test-query: SELECT 1
  connection-init-sql: SELECT 1
I am using different databases to run my application. Recently I increased the net_write_timeout value in the MySQL database, as I faced some timeout errors because of it. These errors did not occur in the PostgreSQL or MS SQL databases.
My question is: what is the equivalent of the net_write_timeout flag (MySQL) for the PostgreSQL and MS SQL databases?
For SQL Server there are a couple of timeouts, but nothing that directly corresponds to net_write_timeout, as far as I know.
The default connection timeout (the time allowed to connect to an instance) is 15 seconds. The default execution timeout (the time allowed to execute a query, etc.) is 600 seconds. The connection timeout is set on the connection string; the execution timeout is set on the connection in code, or at the server level by sp_configure.
Refs: https://msdn.microsoft.com/en-AU/library/ms189040.aspx
and http://www.connectionstrings.com/all-sql-server-connection-string-keywords/
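As a sketch of where each timeout lives (the server name and values below are illustrative):

```sql
-- Server-level timeout for remote queries, in seconds (0 = no timeout):
EXEC sp_configure 'remote query timeout', 600;
RECONFIGURE;

-- The connection timeout lives in the connection string instead, e.g.:
--   Server=myServer;Database=myDb;Connect Timeout=15;
-- The execution timeout is set on the command/connection object in code.
```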
I am in the midst of evaluating default SQL Server 2008 R2 configuration settings.
I have been asked to run the script below on the production server:
sp_configure 'remote query timeout', 0
sp_configure 'max server memory (MB)', 28000
sp_configure 'remote login timeout', 300
go
reconfigure with override
go
Before proceeding on this, I have been trying to gauge the advantages and disadvantages of each line of SQL code.
Edited on 17-May-2016 14:19 IST:
A few Microsoft links that I have referred to are below:
https://msdn.microsoft.com/en-us/library/ms178067.aspx
https://msdn.microsoft.com/en-IN/library/ms175136.aspx
Edited on 23-May-2016 11:15 IST:
I have set the 'MAX SERVER MEMORY' based on feedback here and further investigation from my end. I have provided my inferences to the customer.
I have also provided my inferences on the other 2 queries based on views and answers provided here.
Thanks to all for your help. I will update this question after inputs from the customer.
The following query will set the remote query timeout to 0, i.e. no timeout:
sp_configure 'remote query timeout', 0
This value has no effect on queries received by the Database Engine.
To disable the time-out, set the value to 0. A query will wait until
it is canceled.
sp_configure 'max server memory (MB)', 28000
Sets the amount of memory (in megabytes) that is managed by the SQL
Server Memory Manager for a SQL Server process used by an instance of
SQL Server.
sp_configure 'remote login timeout', 300
If you have applications that connect remotely to the server, you can set the login timeout using the above query.
Note :
You can also set these server properties via SSMS (Management Studio), where you can set the maximum and minimum values, rather than using code as shown in your post.
You can certainly try these queries, but the settings you should opt for depend on the hardware and the type of application you are working with.
I would generally say that these statements are quite idiotic. Yes, seriously.
Line by line:
sp_configure 'remote query timeout', 0
Makes queries run for an unlimited time before aborting. While I accept that there are long-running queries, those should be rare (the default timeout of IIRC 30 seconds handles 99.99% of queries), and the application programmer can set an appropriate timeout in the rare cases where a particular query needs it.
sp_configure 'max server memory (MB)', 28000
Sets max server memory to 28 GB. Well, that is nonsense: the DBA should have set that to a sensible value upon install, so it is not needed unless the DBA is incompetent. Whether roughly 28 GB makes sense I cannot comment on.
sp_configure 'remote login timeout', 300
Sets the timeout for remote logins to 300 seconds. The default of 30 seconds is already plenty. I have serious problems imagining a scenario where the server is healthy and does not process logins within a handful of seconds.
The only scenario I have seen where this whole batch would make sense is a server dying from overload, which is most often down to some brutal incompetence somewhere: either the admin's (a machine with 64 GB of RAM configured to use only 2 GB for SQL Server, for example) or, more often, the programmers' (no indexes, ridiculously bad SQL making the server die from overload). Been there, seen that way too often.
Otherwise the timeouts really make little sense.
"Remote query timeout" sets how much time may pass before a remote query times out.
"Remote login timeout" sets how much time may pass before a login attempt times out.
The values set here could make sense in certain conditions (slow, high-latency network, for example).
"Max server memory" is different. It is a very useful setting, and it should almost always be set to avoid possible performance problems. What value to choose depends on how much memory is in the server as a whole and which other applications/services are running on it. If it is a dedicated server with 32 GB of memory, this value sounds about right.
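For what it's worth, "max server memory" is an advanced option, so setting it from T-SQL looks roughly like this (28000 MB is the value from the question, leaving about 4 GB for the OS on a dedicated 32 GB box):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28000;
RECONFIGURE;
```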
None of these can really be tested in the test environment, I'm afraid, unless you have a 1:1 replica of the prod environment.
We ran out of space on our production server, and during this time we started getting "Cannot execute 'sp_replcmds' on " in replication. The Distributor is the Publisher as well.
After fixing the space issue, this is the only error I'm getting in replication.
We have five databases set up for replication. The four small databases work with no error messages, except that the Last Synchronization Status says the following: "The process could not connect to Distributor "
The one large database gets the error in the subject and also the error that it cannot connect to the Distributor. The error code is MSSQL_REPL22037.
I checked the DB owner and it is set up correctly. I stopped and started the Log Reader Agents too many times to count. I restarted the SQL Server Agent processes on the subscriber server as well.
I solved this one myself, after trying all the other suggestions.
It was definitely the BatchSize and QueryTimeOut properties.
To change them:
Launch Replication Monitor.
Expand to the Publication in question.
Go to Agents Tab.
Right Click on Log Reader Agent > Agent Profile.
Create a New Agent Profile with the new parameters you need.
Set the New Profile to 'Use for this Agent'
Restart the Log Reader Agent and just wait.
Rinse/Repeat until you get the right amount.
I set the QueryTimeOut to 2400 and the BatchSize to 100, from 1800 and 500 respectively.
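For reference, the same two properties map to Log Reader Agent command-line parameters; if you run the agent outside a profile, the change would look roughly like this (server and database names are placeholders):

```
logread.exe -Publisher [MyPublisher] -PublisherDB [MyDB] -Distributor [MyDistributor]
    -QueryTimeOut 2400 -ReadBatchSize 100
```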
Can someone please assist me by explaining the commands that show the number of servers available for an OpenEdge DB in PROENV?
Online documentation seems to be few and far between.
The problem is that I'm trying to connect to an OpenEdge DB via ODBC, but one of our OpenEdge DBs rejects the connection, stating "OpenEdge broker rejects the connection".
I'm presuming there are no SQL servers available (OpenEdge _mprosrv.exe), so the next step would be to check what is available/in use, hence the question about the PROENV command.
Please note that 3 out of 4 of our connections go through an MS SQL Server linked server object (using the ODBC connection's System DSN); it's just the final remaining OpenEdge DB whose broker is rejecting.
Thanks,
Richard
It looks like you want to look at PROMON in PROENV, specifically PROMON > R&D > Option 1 (Status Display) > Option 3 (Servers).
I'd also like to ask whether you started the secondary broker for the SQL server type. If you are trying to connect to a 4GL broker with a SQL connection, or vice versa, the connection will be rejected.
From PROENV, use "promon dbname" to start PROMON. Then go to the R&D menu (type R&D) and look at options 3 & 17 to see what is going on with servers.
You can also glean useful information from the parameters that were used at DB startup. Open dbname.lg in a text editor and find the last (333) message; this is the most recent DB startup. The next 50 to 75 lines list the parameters and settings that were actually used. You are interested in the settings related to -Mn, -Ma, -Mpb, -Mi and -S.
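For reference, those flags appear on the broker startup line; a SQL-capable broker startup might look roughly like this (database name, port and values are placeholders, not a recommendation):

```
proserve mydb -S 5000 -ServerType SQL -Mn 5 -Ma 4 -Mpb 2 -Mi 1
```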