Is it possible to configure the Zabbix server to push data to the database more often? We found in the Postgres log that the Zabbix server pushes data to the database every 5 seconds. How can this interval be shortened for more real-time monitoring?
You can probably change CONFIG_HISTSYNCER_FREQUENCY in src/zabbix_server/server.c to a lower value; it is 5 seconds by default. Note that this is a compile-time constant, so you will have to rebuild the Zabbix server after changing it.
I am using DBeaver to connect to my locally hosted MS SQL database. I am trying to export my tables as CSV files. When the query is fairly large (40k rows, which takes a couple of minutes to run), the export stops with the message
"SQL Error: The connection is closed".
I kept the default parameters for the DBeaver database connection, and my SQL Server timeout is the default one (10 minutes, which is longer than it takes for the error to appear).
Any idea where it might come from?
Binary values can be extremely large and heavy, so they take a long time to transfer over the network. That is most likely why you are getting the error. In my opinion:
Split your query into multiple fetches (say, 1k records at a time), as in the sketch below.
Fetch only exactly what you need (a WHERE condition, or only the columns you need instead of all of them).
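A minimal sketch of that batched approach in Java/JDBC, assuming the Microsoft JDBC driver and a hypothetical table dbo.MyTable with an indexed Id column (all names and credentials here are placeholders):

import java.sql.*;

public class BatchedExport {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:sqlserver://localhost;databaseName=MyDb;user=myuser;password=secret;encrypt=false";
        // Keyset paging: each round trip fetches at most 1000 rows after the last seen Id.
        String sql = "SELECT Id, Name FROM dbo.MyTable WHERE Id > ? "
                   + "ORDER BY Id OFFSET 0 ROWS FETCH NEXT 1000 ROWS ONLY";
        long lastId = 0;
        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement(sql)) {
            while (true) {
                ps.setLong(1, lastId);
                int rows = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastId = rs.getLong("Id");
                        // write rs.getString("Name") etc. to the CSV file here
                        rows++;
                    }
                }
                if (rows < 1000) break; // last (partial) batch reached
            }
        }
    }
}

Each batch is a short, fast query, so the connection never sits on a single multi-minute statement that a timeout can kill.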
Most database drivers let you configure connectTimeout, a parameter that declares how long the client (DBeaver) will wait before deciding something went wrong.
You can change this parameter by right-clicking on the name of the server, choosing Edit Connection, then clicking on the Driver properties tab and searching for the connectTimeout parameter (or something equivalent). Increase the value you find there.
I had this problem with PostgreSQL 13: I found connectTimeout = 20 and increased it to 200 to overcome the issue.
An old MySQL driver showed connectTimeout = 20000, most probably in milliseconds.
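For reference, the same kind of property can be set programmatically when connecting outside DBeaver. A minimal sketch for the PostgreSQL JDBC driver, where connectTimeout and socketTimeout are given in seconds (URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "postgres");
        props.setProperty("password", "secret");
        props.setProperty("connectTimeout", "200"); // seconds to wait for the TCP connect
        props.setProperty("socketTimeout", "600");  // seconds to wait for a read; guards long queries

        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", props)) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}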
I am a bit new to PostgreSQL. I have set up my PostgreSQL DB on the Azure cloud.
It's an Ubuntu 18.04 LTS (4 vCPU, 8 GB RAM) machine with PostgreSQL 9.6.
The problem occurs when the connection to the PostgreSQL DB stays idle for some time, say 2 to 10 minutes: the connection stops responding, the request is never fulfilled, and the query keeps processing.
The same happens with my Java Spring Boot application: the connection does not respond and the query keeps processing.
This happens randomly and the timing is not traceable: sometimes it happens after 2 minutes, sometimes after 10 minutes, and sometimes not at all.
I have tried the PostgreSQL configuration file parameters:
tcp_keepalives_idle, tcp_keepalives_interval, tcp_keepalives_count.
Also the statement_timeout and session_timeout parameters, but nothing changed.
Any suggestion or help would be appreciated.
Thank you
If you are setting up a PostgreSQL DB connection on an Azure VM, you have to be aware that there are inbound and outbound connection timeouts. According to
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#idletimeout, outbound connections have a 4-minute idle timeout. This timeout is not adjustable. For the inbound timeout there is an option to change it in the Azure Portal.
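One client-side mitigation that can help here is enabling TCP keepalives on the JDBC connection, so an idle connection keeps generating traffic before the 4-minute load-balancer cutoff. A sketch for the PostgreSQL JDBC driver (host, database, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "myuser");   // placeholder credentials
        props.setProperty("password", "secret");
        // Ask the driver to enable SO_KEEPALIVE on the socket. The actual probe
        // interval comes from the client OS (e.g. net.ipv4.tcp_keepalive_time on
        // Linux) and must stay below Azure's 4-minute idle timeout to help.
        props.setProperty("tcpKeepAlive", "true");
        props.setProperty("socketTimeout", "120"); // seconds; fail fast instead of hanging

        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://myazurevm:5432/mydb", props)) {
            System.out.println("Connected with keepalive enabled");
        }
    }
}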
We ran into a similar issue and were able to resolve it on the client side. We changed the Spring Boot default Hikari configuration as follows:
hikari:
  connection-timeout: 20000
  validation-timeout: 20000
  idle-timeout: 30000
  max-lifetime: 40000
  minimum-idle: 1
  maximum-pool-size: 3
  connection-test-query: SELECT 1
  connection-init-sql: SELECT 1
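For comparison, here is what that configuration looks like when the pool is built programmatically (a sketch; the JDBC URL and credentials are placeholders). The key idea is that idle-timeout and max-lifetime stay well below Azure's 4-minute idle cutoff, so the pool retires connections before the load balancer silently drops them:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource createPool() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:postgresql://myazurevm:5432/mydb"); // placeholder URL
        cfg.setUsername("myuser");
        cfg.setPassword("secret");
        cfg.setConnectionTimeout(20_000); // ms to wait for a connection from the pool
        cfg.setValidationTimeout(20_000); // ms allowed for the aliveness check
        cfg.setIdleTimeout(30_000);       // retire connections idle for more than 30 s
        cfg.setMaxLifetime(40_000);       // retire every connection after 40 s
        cfg.setMinimumIdle(1);
        cfg.setMaximumPoolSize(3);
        cfg.setConnectionTestQuery("SELECT 1");
        cfg.setConnectionInitSql("SELECT 1");
        return new HikariDataSource(cfg);
    }
}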
We have a Spring Java application that connects to an MS SQL Server cluster of 2 nodes (2016 SP2 Standard edition).
We are testing failover: if a node fails, the application needs 90 seconds before reconnecting to the other node, which would be too much for production.
After reading and re-reading the HikariCP documentation for Java, I tried to test this scenario with DataGrip: I ran a long query (inserting a row into a table every 500 ms for 10 minutes) and got the same issue: the database was unavailable for 90 seconds after one node failed.
Maybe the issue is on the cluster side and not the application side...
Is there any SQL Server cluster configuration that prevents us from reconnecting before 90 seconds have passed?
How can the connection come back before those 90 seconds? Is there any caching or default configuration that we should update?
Thanks a lot for your help
EDIT
The test was wrong; I described in the comments the issue I am actually getting:
it reconnects as soon as the 1st node is back. The issue is after a second failover: no connection can be established then (I wait for the two nodes to synchronize before the 2nd failover).
I am using different databases to run my application. Recently I increased the net_write_timeout value in the MySQL database, as I was facing some timeout errors because of it. These errors do not occur with the Postgres or MSSQL databases.
My question is: what is the equivalent flag of net_write_timeout (MySQL) for the Postgres and MSSQL databases?
For SQL Server there are a couple of timeouts, but nothing that directly corresponds to net_write_timeout, as far as I know...
The default connection timeout (the time allowed to connect to an instance) is 15 seconds. The default execution timeout (the time allowed to execute a query etc.) is 600 seconds. The connection timeout is set on the connection string; the execution timeout is set on the connection in code, or at the server level via sp_configure.
Refs: https://msdn.microsoft.com/en-AU/library/ms189040.aspx
and http://www.connectionstrings.com/all-sql-server-connection-string-keywords/
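A sketch of where those two timeouts live when connecting from Java with the Microsoft JDBC driver (server name, credentials, and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlServerTimeouts {
    public static void main(String[] args) throws Exception {
        // loginTimeout on the connection string: time allowed to connect (default 15 s)
        String url = "jdbc:sqlserver://myserver;databaseName=MyDb;"
                   + "user=myuser;password=secret;loginTimeout=15;encrypt=false";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement()) {
            // The execution timeout is set per statement in code, not on the URL.
            st.setQueryTimeout(600); // seconds the query may run before being cancelled
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM dbo.MyTable")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}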
I have a SQL Server 2008 database, and I need merge replication because I want to sync with mobile devices afterwards.
So I created a replication, but when it comes to starting the snapshot agent, the agent tries to start for about 20 minutes and then shows the message
The replication agent has not logged a progress message in 10 minutes.
This might indicate an unresponsive agent or high system activity.
Verify that records are being replicated to the destination and that
connections to the Subscriber, Publisher, and Distributor are still
active.
There aren't any other error messages, neither in the snapshot agent status window nor in the agent log window.
I don't have the domain administrator account, but I have the local administrator and a domain user with admin privileges. Both have all rights on the database and are on the access list of the replication.
The SQL Server Agent runs under the local administrator account, and there are 3 merge replications on the server that are working.
The job also runs under the local administrator.
Thank you for your help, Karl
So it works again...
Maybe someone else will have the same issue one day, so I am posting the solution here:
I investigated on the server and found out that the SQL Server service runs under a local user account. The reason for this is that there were problems with the backup system used by our customers, so it was changed years ago.
Because of the local user account, a 15404 error occurs (SQL Server cannot obtain information about the domain account).
Knowing that I must not use domain accounts, I also solved the initial problem with my snapshot agent. I searched for hours (nearly days ;)) and it was just this little change:
When the replication is created, the job is created too. The job has three steps. The job owner is the local admin, as it is for the SQL Server Agent service. But the second step of my job (running the snapshot agent) has one setting: Run as. By default this is not the job owner but the user who ran the creation, in my case my domain account.
Once I set that to the local administrator as well, everything worked fine again.
Thanks, Karl
I had the same issue, and the fix below resolved it. The replication agent was timing out after 10 minutes, and changing the heartbeat from 10 to 30 minutes solved the problem.
Run the command below:
exec sp_changedistributor_property @property = 'heartbeat_interval', @value = 30;
and then restart the SQL Server Agent on the subscriber to continue syncing.