NTP Configuration without Internet - ntp

I am trying to set up a local NTP server without an internet connection.
Below is my ntp.conf on the server:
# Server
server 127.127.1.0
fudge 127.127.1.0 stratum 5
broadcast 10.108.190.255
Below is my ntp.conf on the clients:
# Clients
server 10.108.190.14
broadcastclient
But my clients are not syncing with the server. The output of ntpq -p on the clients shows that they are not taking time from the server, and the server IP is shown at stratum 16.
Could anyone please help with this issue?

The server should use its local clock as the source. A better setup for isolated networks is to use orphan mode, which also gives you fail-over. Check out the documentation:
http://www.eecis.udel.edu/~mills/ntp/html/orphan.html
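For reference, here is a minimal sketch of what an orphan-mode server configuration might look like on an isolated LAN; the orphan stratum and driftfile path are illustrative assumptions, and the broadcast address is the one from the question:
tos orphan 5                      # serve time at stratum 5 when no upstream source is reachable
broadcast 10.108.190.255          # optional: keep broadcasting to the LAN
driftfile /var/lib/ntp/ntp.drift  # common drift file location on many distributions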

You need to configure the clients with the prefer keyword. ntpd tries its hardest not to honor local undisciplined clocks in order to prevent screw-ups.
server 10.108.190.14 prefer
For more information see: http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#AEN3658
This is all assuming that you have included the full and entire ntp.conf and did not leave out any bits about restrict lines.
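If restrict lines are the culprit, a commonly used permissive baseline on the server would look something like the following; the subnet mask here is an assumption based on the broadcast address in the question:
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict 10.108.190.0 mask 255.255.255.0 nomodify notrap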

How about using chrony?
Steps
Install chrony on both of your devices:
sudo apt install chrony
Let's assume the server IP address is 192.168.1.87; the client configuration (/etc/chrony/chrony.conf) is then as follows:
server 192.168.1.87 iburst
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony
Server configuration (/etc/chrony/chrony.conf), assuming your client IP is 192.168.1.14:
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony
local stratum 8
manual
allow 192.168.1.0/24
allow 192.168.1.14
Restart chrony on both computers:
sudo systemctl stop chrony
sudo systemctl start chrony
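Equivalently, assuming a systemd-based distribution, a single command does the same:
sudo systemctl restart chrony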
5.1 Check the status on the client side:
sudo systemctl status chrony
Output:
Jun 24 13:26:42 op-desktop systemd[1]: Starting chrony, an NTP client/server...
Jun 24 13:26:42 op-desktop chronyd[9420]: chronyd version 3.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 -DEBUG)
Jun 24 13:26:42 op-desktop chronyd[9420]: Frequency -6.446 +/- 1.678 ppm read from /var/lib/chrony/chrony.drift
Jun 24 13:26:43 op-desktop systemd[1]: Started chrony, an NTP client/server.
Jun 24 13:26:49 op-desktop chronyd[9420]: Selected source 192.168.1.87
5.2 The chronyc tracking output:
Reference ID : C0A80157 (192.168.1.87)
Stratum : 9
Ref time (UTC) : Thu Jun 24 10:50:34 2021
System time : 0.000002018 seconds slow of NTP time
Last offset : -0.000000115 seconds
RMS offset : 0.017948076 seconds
Frequency : 5.491 ppm slow
Residual freq : +0.000 ppm
Skew : 0.726 ppm
Root delay : 0.002031475 seconds
Root dispersion : 0.000664742 seconds
Update interval : 65.2 seconds
Leap status : Normal
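As an additional check (not part of the original answer), chrony ships commands that are handy for verifying this setup:
chronyc sources -v        # on the client: show polled sources, '*' marks the selected one
sudo chronyc clients      # on the server: list the NTP clients this server has seen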

Related

Connection drop from PostgreSQL on Azure virtual machine

I am a bit new to PostgreSQL. I have set up my PostgreSQL DB on Azure Cloud.
It's an Ubuntu 18.04 LTS machine (4 vCPU, 8 GB RAM) running PostgreSQL 9.6.
The problem occurs when the connection to the PostgreSQL DB stays idle for some time, say 2 to 10 minutes: the connection stops responding, the request is never fulfilled, and the query just keeps processing.
The same goes for my Java Spring Boot application: the connection doesn't respond and the query keeps processing.
This happens randomly and the timing is not traceable: sometimes it happens after 2 minutes, sometimes after 10 minutes, and sometimes not at all.
I have tried the PostgreSQL configuration file parameters tcp_keepalive_idle, tcp_keepalive_interval and tcp_keepalive_count, and also statement_timeout and session_timeout, but nothing changes.
Any suggestion or help would be appreciated.
Thank you
If you are setting up a PostgreSQL DB connection on an Azure VM, you have to be aware that there are inbound and outbound connection timeouts. According to
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#idletimeout, outbound connections have a 4-minute idle timeout, and this timeout is not adjustable. For the inbound timeout there is an option to change it in the Azure Portal.
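A related workaround (not something this answer's author did, just a sketch) is to make TCP keepalives fire well before the 240-second limit so that idle connections never look dead to the load balancer, for example at the OS level on the client VM:
# /etc/sysctl.d/99-tcp-keepalive.conf  (illustrative values)
# Start probing after 2 minutes of idle time, probe every 30 seconds,
# and give up after 3 unanswered probes.
net.ipv4.tcp_keepalive_time = 120
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
Apply it with sudo sysctl --system.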
We ran into a similar issue and were able to resolve it on the client side. We changed the Spring Boot default Hikari configuration as follows:
hikari:
  connection-timeout: 20000
  validation-timeout: 20000
  idle-timeout: 30000
  max-lifetime: 40000
  minimum-idle: 1
  maximum-pool-size: 3
  connection-test-query: SELECT 1
  connection-init-sql: SELECT 1
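The key point of these values is that max-lifetime (40 s) and idle-timeout (30 s) are well below Azure's 4-minute idle limit, so pooled connections are recycled before the platform can silently drop them. If you configure the pool through application.properties instead of YAML, the equivalent settings would look roughly like this (assuming the pool lives under the standard spring.datasource.hikari prefix):
spring.datasource.hikari.connection-timeout=20000
spring.datasource.hikari.validation-timeout=20000
spring.datasource.hikari.idle-timeout=30000
spring.datasource.hikari.max-lifetime=40000
spring.datasource.hikari.minimum-idle=1
spring.datasource.hikari.maximum-pool-size=3
spring.datasource.hikari.connection-test-query=SELECT 1
spring.datasource.hikari.connection-init-sql=SELECT 1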

SQL Server commit and working set

There's a SQL Server instance, MSSQLSERVER, running on localhost on Windows 7. I realized that its commit is much larger than its working set. Here's a comparison between my local instance and another instance, MS_MSBI_SSDS, running on Windows Server 2008 R2.
Local SQL Server
Image         PID   Hard Faults/sec  Commit (KB)  Working Set (KB)  Sharable (KB)
sqlservr.exe  2380  0                45,615,948   61,992            17,784
Remote SQL Server
Image         PID   Hard Faults/sec  Commit (KB)  Working Set (KB)  Sharable (KB)  Private (KB)
sqlservr.exe  1964  1                6,464,988    5,496,884         40,608          5,456,636
The large commit makes the local machine almost unusable; the commit charge is at 100% as soon as MSSQLSERVER launches. Note that there isn't any particular workload running on the local SQL Server, and it has 2 databases (8 GB) copied from the remote one.
My questions are:
Why does the local instance have such a large commit when it has only a small working set?
Can I find out what has actually been committed?
How can I decrease the commit charge?
Might the problem come from McAfee? I don't have the right to modify it due to company policy. What can I do? Here's a related post: SQLSERVR.EXE High Commit Usage causing a low virtual memory condition.

Stratum value of NTP server in ntpq output

I have a stupid question, but I don't understand and I'd like to :)
I've set up a server, mysrv, and several client machines. The server is used as the NTP server and is configured with an undisciplined local clock: fudge 127.127.1.0 stratum 5
First question: if I understand correctly, my NTP server is now set as stratum 5?
Now on my clients, when I type ntpq -p to check, they are synchronized, so that's cool, but they see mysrv as stratum 6 (the st column of ntpq -p indicates 6)... I was expecting 5.
Why?
Thanks a lot
Your reasoning in the comment is correct. A server's stratum is X+1, where X is the stratum of the time source the machine is synced to. Reference clocks (GPSDOs, rubidium standards, etc.) are stratum 0, and the time servers connected directly to them are stratum 1. In your case the fudged local clock is the stratum-5 source, so mysrv itself serves time at stratum 6, which is exactly what your clients report in the st column.

How can I configure load-balanced NTP servers

I want to use the ntp.nict.jp server for NTP on my Linux machine.
When I configure this server, I get the following NTP status:
[root@machine ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp-a2.nict.go. .INIT.          16 u    -   64    0    0.000    0.000   0.000
What I can see is that the domain ntp.nict.jp is load-balanced across different hosts:
ntp-b3.nict.go.jp(133.243.238.164)
ntp-a3.nict.go.jp(133.243.238.244)
ntp-b2.nict.go.jp(133.243.238.163)
ntp-a2.nict.go.jp(133.243.238.243)
If I configure any one of the hosts in the above list, NTP works fine.
I wish to configure all of the servers behind ntp.nict.jp for NTP.
How can I do this?
You can use the pool directive in the ntp.conf file as follows:
pool ntp.nict.jp iburst
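For context, a minimal sketch of how this might sit in a complete ntp.conf (the driftfile path and restrict lines are generic defaults, not taken from the question):
driftfile /var/lib/ntp/drift
pool ntp.nict.jp iburst
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
After restarting ntpd, ntpq -p should list several associations resolved from the pool name (the various ntp-*.nict.go.jp hosts) instead of a single server entry.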

Controlling Nagios Login Frequency When Monitoring Remote Hosts

I've constructed a Nagios remote-host monitoring setup (non-NRPE), and it's functional and useful, except:
Somehow, I found that the Nagios host logs in to various remote hosts, only to log out one second later (if not in that same second), every 3 minutes or so; how often it does this doesn't appear to be deterministic. These logins don't coincide with any check periods I've defined.
From an arbitrary member of my remote host array's auth.log:
Feb 25 10:51:11 MACHINE sshd[3590]: Accepted publickey for nagios from 10.1.2.110 port 54069 ssh2
Feb 25 10:51:11 MACHINE sshd[3590]: pam_unix(sshd:session): session opened for user nagios by (uid=0)
Feb 25 10:51:11 MACHINE sshd[3599]: Received disconnect from 10.1.2.110: 11: disconnected by user
Feb 25 10:51:11 MACHINE sshd[3590]: pam_unix(sshd:session): session closed for user nagios
And then, three minutes later:
Feb 25 10:54:10 MACHINE sshd[3632]: Accepted publickey for nagios from 10.1.2.110 port 54176 ssh2
Feb 25 10:54:10 MACHINE sshd[3632]: pam_unix(sshd:session): session opened for user nagios by (uid=0)
Feb 25 10:54:10 MACHINE sshd[3642]: Received disconnect from 10.1.2.110: 11: disconnected by user
Feb 25 10:54:10 MACHINE sshd[3632]: pam_unix(sshd:session): session closed for user nagios
I can't figure it out. My service follows the generic-service template, which I've modified for a slightly longer check interval and max check attempts. Why is Nagios on this serial login spree?
Have you checked your host definitions? What do you use for the host check command? If that performs its check remotely (for example over SSH, rather than something like a local check-ping), then the host checks could be logging in as well.
Also, you can check your Nagios log file to see what checks are actually being performed. I usually run 'tail -f nagios.log | grep [IP_ADDRESS_of_target_host]' to narrow the results to a specific machine.
If nothing is showing up there, as a last-ditch effort you can enable debugging and check the Nagios debug file: EVERYTHING Nagios does will go into this file. As the debug file tends to roll very quickly (at least in our install, with more than 6,800 checks), you may have to get creative with 'grep' to find what you're looking for.
If the check is returning a CRITICAL/WARNING state, it could be that your retry_interval is set to 3 minutes, which I believe is the default. Double-check your service template in nagios/etc/objects/templates.
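For reference, a sketch of the directives in question in a generic-service-style template, with illustrative values rather than your actual configuration:
define service {
    name                  generic-service
    max_check_attempts    3     ; retries before the state goes HARD
    check_interval        10    ; minutes between normal checks
    retry_interval        3     ; minutes between retries while in a SOFT non-OK state
    register              0     ; template only, not a real service
}
With retry_interval set to 3, a service stuck in a SOFT CRITICAL/WARNING state is rechecked (and, with an SSH-based check, logged into) every 3 minutes, which would match the 3-minute pattern in the auth.log above.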
