ODBC driver for SQL Server ignores connection timeout attributes - sql-server

I connect with sqlcmd from system 1 to system 2
./sqlcmd -S <system2_ip_address> -U myLogin -P myPassword -t 7
System 2 has a running MSSQL server. I can then read, from system 1, the databases that exist on system 2.
1> select * from dbo.spt_monitor
2> go
If I cut off the network connection between system 1 and system 2 and try to read again on system 1, the SQL_ATTR_QUERY_TIMEOUT previously set with -t 7 is ignored. If system 1 is a Windows machine, the query always times out after 15 seconds. If system 1 is a Unix machine, the query never times out and runs indefinitely; it only recognizes that the connection to system 2 was lost once I restore the network connection.
How can I configure the timeout? Both -t (SQL_ATTR_QUERY_TIMEOUT) and -l (SQL_ATTR_LOGIN_TIMEOUT) are completely ignored: when called from a Windows machine the timeout is always 15 seconds, and when called from a Unix machine it is always infinite.

Don't you need to set the -Querytimeout=xxxx param?
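One angle worth checking: SQL_ATTR_QUERY_TIMEOUT bounds how long the driver waits for a response to a running statement, but noticing a silently dead network path is a TCP-level matter. As a sketch, assuming a recent Microsoft ODBC Driver 17 for SQL Server on Linux (which reportedly honours KeepAlive and KeepAliveInterval keywords in newer versions; verify against your driver's documentation), you could tighten keepalives in the driver section of odbcinst.ini. The values are in seconds and purely illustrative:
# /etc/odbcinst.ini (add to the existing driver section)
[ODBC Driver 17 for SQL Server]
KeepAlive=5
KeepAliveInterval=2
With settings like these the OS starts probing an idle connection after a few seconds and drops it once the probes go unanswered, so the hang surfaces as a broken connection instead of waiting forever.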

Related

Connection drop from postgresql on azure virtual machine

I am a bit new to postgresql db. I have done a setup over Azure Cloud for my PostgreSQL DB.
It's Ubuntu 18.04 LTS (4vCPU, 8GB RAM) machine with PostgreSQL 9.6 version.
The problem is that when the connection to the PostgreSQL DB stays idle for some time, say 2 to 10 minutes, the connection stops responding: the request is never fulfilled and the query just keeps processing.
The same goes for my Java Spring Boot application: the connection doesn't respond and the query keeps processing.
This happens randomly, so the timing is not traceable: sometimes it happens after 2 minutes, sometimes after 10 minutes, and sometimes not at all.
I have tried the PostgreSQL configuration file parameters:
tcp_keepalive_idle, tcp_keepalive_interval, tcp_keepalive_count.
I also tried the statement_timeout & session_timeout parameters, but nothing changed.
Any suggestion or help would be appreciated.
Thank you
If you are setting up a PostgreSQL DB connection on an Azure VM, you have to be aware that there are inbound and outbound connection timeouts. According to
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#idletimeout , outbound connections have a 4-minute idle timeout, and this timeout is not adjustable. For the inbound timeout there is an option to change it in the Azure Portal.
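Since the outbound idle timeout cannot be raised, one common workaround is to keep idle connections from ever looking idle. A sketch at the OS level on the VM (standard Linux sysctl names; the values are illustrative, only need to fire before the 4-minute window, and only apply to sockets that enable SO_KEEPALIVE):
# /etc/sysctl.d/99-tcp-keepalive.conf
net.ipv4.tcp_keepalive_time = 120
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 8
# apply with: sudo sysctl --system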
We ran into a similar issue and were able to resolve it on the client side. We changed the default Spring Boot Hikari configuration as follows:
hikari:
  connection-timeout: 20000
  validation-timeout: 20000
  idle-timeout: 30000
  max-lifetime: 40000
  minimum-idle: 1
  maximum-pool-size: 3
  connection-test-query: SELECT 1
  connection-init-sql: SELECT 1
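If pool tuning alone doesn't help, another client-side knob is the PostgreSQL JDBC driver's own socket options. A sketch, assuming the standard pgJDBC driver; the host, database name and the 60-second value are illustrative:
spring:
  datasource:
    url: jdbc:postgresql://<host>:5432/<db>?tcpKeepAlive=true&socketTimeout=60
tcpKeepAlive=true turns on SO_KEEPALIVE for the connection (so kernel keepalive settings actually apply), and socketTimeout makes a read that hangs longer than the given number of seconds fail with an exception instead of blocking forever.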

Advice on mariadb replication issue

I am running mariadb 5.5.2 on two CentOS 7.1.1503 bare-metal Dell servers. The servers are each 16 months old and were never rebooted until July 2017. Call the first server salt01 and the second one salt02. salt02 was rebooted first, salt01 next.
Since then, we have noticed that the db on salt02 is missing entries we see on salt01. The missing records coincide with the reboot; that is, data written since then is missing, but earlier data is present on salt02.
iptables is not running on these two servers.
This appears to be a replication issue.
We have two ways to fix it:
Follow a re-sync procedure that goes like this:
At the master:
RESET MASTER;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
mysqldump -u root -p --all-databases > /a/path/mysqldump.sql
UNLOCK TABLES;
and on slave:
STOP SLAVE;
mysql -uroot -p < mysqldump.sql
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=valuefromshowmasterstatus;
START SLAVE;
Fix the replication configuration.
We notice this in the file /etc/my.cnf:
bind-address = 127.0.0.1
on salt02, which is believed to be the slave. How critical is this? We could point bind-address to the master salt01 and restart mariadb on salt02.
I was wondering about your thoughts on which way to go here. I'm not a DBA. Many thanks for your thoughts. Any questions, feel free to ask.
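One hedged diagnostic sketch before picking either option (standard MariaDB statements; host names as in the question): check whether the replication threads on salt02 are even running and what error stopped them, and compare against the master's current position.
-- on salt02 (the presumed slave)
SHOW SLAVE STATUS\G
-- check Slave_IO_Running, Slave_SQL_Running, Last_IO_Error and Last_SQL_Error
-- on salt01 (the presumed master)
SHOW MASTER STATUS;
As a side note, bind-address names a local interface for the server to listen on, not a remote host, so pointing it at salt01 on salt02 would not be meaningful; 127.0.0.1 on the slave mainly restricts who can connect to salt02, whereas the same setting on the master would stop the slave from connecting at all.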

what is the default value for write timeout in postgres and MSSQL database connections?

I am using different databases to run my application. Recently I increased the net_write_timeout value in the MySQL database, as I was facing some timeout errors because of it. These errors did not occur in the Postgres or MSSQL databases.
My question is: what is the equivalent of the net_write_timeout flag (MySQL) for the Postgres and MSSQL databases?
For SQL Server there are a couple of timeouts, but nothing that directly corresponds to net_write_timeout - afaik...
The default connection timeout (time allowed to connect to an instance) is 15 seconds. The default execution timeout (time allowed to execute a query etc.) is 600 seconds. The connection timeout is set on the connection string, the execution timeout on the connection in code, or at the server level by sp_configure.
Refs: https://msdn.microsoft.com/en-AU/library/ms189040.aspx
and http://www.connectionstrings.com/all-sql-server-connection-string-keywords/
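For the server-level setting mentioned above, presumably the 'remote query timeout' option (whose default of 600 seconds matches the figure quoted), a minimal sketch:
-- set the server-wide remote query timeout, in seconds
EXEC sp_configure 'remote query timeout', 600;
RECONFIGURE;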

NTP Configuration without Internet

I am trying to setup a local NTP Server without Internet Connection.
Below is my ntp.conf on Server
# Server
server 127.127.1.0
fudge 127.127.1.0 stratum 5
broadcast 10.108.190.255
Below is my ntp.conf on Clients
# Clients
server 10.108.190.14
broadcastclient
but my clients do not sync with the server. The output of ntpq -p on the clients shows that they are not taking time from the server, and the server IP is shown at stratum 16.
Could anyone please help with this issue?
The server should use its local clock as the source. A better setup is to use orphan mode for isolated networks, which gives you fail-over. Check out the documentation:
http://www.eecis.udel.edu/~mills/ntp/html/orphan.html
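A sketch of what the orphan-mode approach might look like in ntp.conf on the server, replacing the local-clock driver lines (the stratum value is illustrative; the broadcast address is taken from the question):
# ntp.conf on the isolated server
tos orphan 5
broadcast 10.108.190.255
With tos orphan, the daemon serves time at the configured orphan stratum whenever no real upstream source is reachable, and several such servers can peer with each other for fail-over.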
You need to configure the clients with the prefer keyword. ntpd tries its hardest not to honor local undisciplined clocks in order to prevent screwups.
server 10.108.190.14 prefer
For more information see: http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#AEN3658
This is all assuming that you have included the full and entire ntp.conf and did not leave out any bits about restrict lines.
How about using chrony?
Steps
Install chrony on both of your devices:
sudo apt install chrony
Let's assume the server IP address is 192.168.1.87; the client configuration (/etc/chrony/chrony.conf) is then as follows:
server 192.168.1.87 iburst
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony
Server configuration (/etc/chrony/chrony.conf), assuming your client IP is 192.168.1.14:
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony
local stratum 8
manual
allow 192.168.1.0/24
allow 192.168.1.14
Restart chrony on both computers:
sudo systemctl stop chrony
sudo systemctl start chrony
Checking on the client side:
sudo systemctl status chrony
Output:
Jun 24 13:26:42 op-desktop systemd[1]: Starting chrony, an NTP client/server...
Jun 24 13:26:42 op-desktop chronyd[9420]: chronyd version 3.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 -DEBUG)
Jun 24 13:26:42 op-desktop chronyd[9420]: Frequency -6.446 +/- 1.678 ppm read from /var/lib/chrony/chrony.drift
Jun 24 13:26:43 op-desktop systemd[1]: Started chrony, an NTP client/server.
Jun 24 13:26:49 op-desktop chronyd[9420]: Selected source 192.168.1.87
chronyc tracking output:
Reference ID : C0A80157 (192.168.1.87)
Stratum : 9
Ref time (UTC) : Thu Jun 24 10:50:34 2021
System time : 0.000002018 seconds slow of NTP time
Last offset : -0.000000115 seconds
RMS offset : 0.017948076 seconds
Frequency : 5.491 ppm slow
Residual freq : +0.000 ppm
Skew : 0.726 ppm
Root delay : 0.002031475 seconds
Root dispersion : 0.000664742 seconds
Update interval : 65.2 seconds
Leap status : Normal
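As one more quick verification on the client (chronyc sources ships with the same chrony package), you can list the configured sources and confirm the server is the selected one, marked with an asterisk:
chronyc sources -v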

maximum concurrent connections for ubuntu?

I would like to ask how many concurrent connections Ubuntu can handle. I heard that XP has a maximum of 10 connections. I will use it as the OS for my small server and will be running MySQL on it.
You are probably making all of the TCP connections from one process and hitting the default limit on the number of open files per process.
You can confirm that limit with 'ulimit -a' or 'ulimit -n' at a shell prompt.
You could increase that limit with 'ulimit -n <limit>' or setrlimit() as root, or by editing /etc/security/limits.conf.
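For a change that survives new sessions, a sketch of the limits.conf route (the user name and the numbers are illustrative; the entry takes effect on that user's next login):
# /etc/security/limits.conf
mysql   soft   nofile   65535
mysql   hard   nofile   65535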
Also see...
Increasing the maximum number of tcp/ip connections in linux
Is there a limit on number of tcp/ip connections between machines on linux?
