How do I remove and change ntpd configurations using ntpq?

Assume ntpd reads the following configuration when started:
keys /etc/ntp.keys
trustedkey 1 2 3
requestkey 2
server <server1_IP> key 1
server <server2_IP>
As described in the ntpq documentation, it's possible to send run-time configuration commands to the server in the same format as the configuration file, using :config [...].
The commands sent are added to the run-time configuration, so if I want to add a new server, I run ntpq -c ":config server <server3_IP>"; if I want to remove an association, ntpq -c ":config unpeer <server2_IP>".
How can I
change configurations? E.g. the key identified by keyID 3 is no longer trusted and has to be removed from trustedkey.
remove existing configurations? E.g. how do I remove the requestkey?
I need those functionalities because I have to be able to reconfigure ntpd at runtime without restarting it.

You can remove configurations with ntpq -c "keyid <your_keyid>" -c "passwd <your_md5_password>" -c ":unconfig <server_IP>".
For changing a configuration I'm not sure, but removing and re-adding should work.
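For example, a minimal sketch of that remove-and-re-add approach for a server association, using the question's placeholder notation (whether the same trick works for the trustedkey and requestkey lines is exactly the part that remains unclear):
# authenticate, drop the old association, then add it back with the new options
ntpq -c "keyid <your_keyid>" -c "passwd <your_md5_password>" -c ":config unpeer <server1_IP>"
ntpq -c "keyid <your_keyid>" -c "passwd <your_md5_password>" -c ":config server <server1_IP> key 2"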
Though it's supported by ntpq, using this functionality is not advised, according to comp.protocols.time.ntp.

How do I fix stale postmaster.pid file on Postgres?

I went to view a postgres schema on my local (MacOS 11) database, and was not able to view my schemas.
Connection Refused in DBeaver!
So I opened my postgres desktop application, and got the warning message, "Stale postmaster.pid file"
How can I fix this?
The problem is that the postmaster.pid file needs to be removed manually so postgres can regenerate it; these are the steps to do it. (Keep in mind the data directory name depends on your version: var-12, var-13, etc.)
Open your terminal, and cd into the postgres directory: cd /Users/$USER/Library/Application\ Support/Postgres
Make sure you're in the right place: run ls and you should see something like var-12 or var-<version #>.
Verify the file is there with ls var-12 (keep in mind var-XX corresponds to your PostgreSQL version).
Verify the Postgres server is not running by checking the desktop app. (Again, the directory could be var-12, var-13, etc., depending on your version.)
Library/Application Support/Postgres
➜ ls var-12
PG_VERSION pg_hba.conf pg_replslot pg_subtrans postgresql.auto.conf
base pg_ident.conf pg_serial pg_tblspc postgresql.conf
global pg_logical pg_snapshots pg_twophase postgresql.log
pg_commit_ts pg_multixact pg_stat pg_wal postmaster.opts
pg_dynshmem pg_notify pg_stat_tmp pg_xact postmaster.pid <----
Then remove postmaster.pid, rm var-12/postmaster.pid
or rm var-<PG version #>/postmaster.pid
Go back to your console and start your postgres server; it should be functioning again and you should have full access to your schemas.
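If you prefer to do the whole thing from the terminal, a condensed sketch of the same steps (var-12 is assumed; substitute your var-<version #> directory):
cd /Users/$USER/Library/Application\ Support/Postgres
ps aux | grep '[p]ostgres'    # confirm no postgres server process is still running
rm var-12/postmaster.pid
Then start the server again from the desktop app.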
cd /Users/<UserName>/Library/Application\ Support/Postgres/
ls
Then we will see one of the directory names, such as var-9, var-10, var-11, etc., depending on which PostgreSQL version we have installed.
Following that, if we see var-11:
cd var-11
rm postmaster.pid
In my case I ran rm var-13/postmaster.pid and that solved it.

Mesosphere installation PermissionError:/genconf/config.yaml

I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
Then the output was:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt
dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I started Firefox through VNC (the VNC session runs as root), and then:
07:53:57:: Root page requested.
07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any idea, please let me know. Thank you very much!
Disable SELinux!
Configure SELINUX=disabled in the /etc/selinux/config file and then reboot!
Make sure SELinux is disabled with the getenforce command:
$ getenforce
Disabled
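If you cannot reboot right away, setenforce can switch SELinux to permissive mode for the current session as a stopgap (this does not survive a reboot, so the /etc/selinux/config change is still needed):
$ sudo setenforce 0
$ getenforce
Permissive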
zhe.
Correctly installing the enterprise edition depends on having the correct system prerequisites. Anyway, I suppose you're still on the bootstrap node, so I will give you a path to succeed in your current task.
Run the script as root, or as a user issuing sudo dcos_generate_config.ee.sh.
The script will also generate the config file automatically; if you want to use your own configuration file, create a folder named genconf and put it inside before running the script. You should change the values inside <> to match your specific configuration. If you need more help with your specific case, send me an email at infofs2 at gmail.com.
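A minimal sketch of that workflow, assuming you run it from the directory containing dcos_generate_config.ee.sh and use the same invocation as in the question:
mkdir -p genconf
cp config.yaml genconf/config.yaml    # your own file (source path illustrative), values inside <> already replaced
sudo bash dcos_generate_config.ee.sh --web -v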

How to correctly add additional SOLR 5 (vm) nodes to SOLR Cloud

I have a SOLR / Zookeeper / Kafka setup. Each on separate VMs.
I have successfully run this all using two SOLR 4.9 vms (Ubuntu)
Now I wish to build two SOLR 5.4 vms and get it all working again.
Essentially, "Upgrade by Replacement"
I have "hacked" a solution to my problem but that makes me very nervous.
To begin, Zookeeper is running. I turn off my SOLR 4.9 vms and delete the config out of Zookeeper (not necessarily in that order... ;-) )
Now, I start up my 'solr5' VM (and SOLR in cloud mode) where I have installed SOLR 5.4 according to the "Production Install" instructions on the SOLR Wiki. I have also installed 5.4 on 'solr6', but it's not running yet.
I issue this command on the 'solr5' machine:
/opt/solr/bin/solr create -c fooCollection -d /home/john/conf -shards 1 -replicationFactor 1
and I get the following output:
Connecting to ZooKeeper at 192.168.56.5,192.168.56.6,192.168.56.7/solr ...
Re-using existing configuration directory statdx
Creating new collection 'fooCollection' using command:
http://localhost:8983/solr/admin/collections?action=CREATE&name=fooCollection&numShards=1&replicationFactor=1&maxShardsPerNode=1&collection.configName=fooCollection
{
  "responseHeader":{
    "status":0,
    "QTime":3822},
  "success":{"":{
      "responseHeader":{
        "status":0,
        "QTime":3640},
      "core":"fooCollection_shard1_replica1"}}}
Everything is working great. I turn on my microservice, and it pumps all my SOLR docs from Kafka into 'solr5'.
Now, I want to add 'solr6' to the collection. I can't find a way to do this besides my hack (which I'll describe later).
The command I used before to create a collection errors out, saying that my collection already exists.
There seems to be no zkcli.sh or solr command that will do what I want. None of the api commands seem to do this either.
Is there not a simple way to tell (SOLR? Zookeeper?) that I want to add another machine to my SOLR nodes, configure it like the first (solr5), and begin replicating data?
Maybe I should have had both machines running when I issued the create command?
I'd be grateful for some "approved" method for doing this, since I need a repeatable "solution" for the same approach in Prod every time SOLR needs to be upgraded.
Now for my hack. Keep in mind I've now spent two days trying to find clear docs on this. No flames please, I totally get that this is not the way to do things. At least, I HOPE this is not the way to do things...
Copy the fooCollection directory from where the create collection command put it on 'solr5' (which was /opt/solr/server/solr/fooCollection_shard1_replica1) to the same location on my 'solr6' VM.
Make what changes seem logical to the collection directory name (it becomes fooCollection_shard1_replica2).
Make what changes seem logical in the core.properties file:
For reference, here's the core.properties file that was created by the create command.
#Written by CorePropertiesLocator
#Wed Jan 20 18:59:08 UTC 2016
numShards=1
name=fooCollection_shard1_replica1
shard=shard1
collection=fooCollection
coreNodeName=core_node1
Here is what the file looked like on 'solr6' when I was done hacking.
#Written by CorePropertiesLocator
#Wed Jan 20 18:59:08 UTC 2016
numShards=1
name=fooCollection_shard1_replica2
shard=shard1
collection=fooCollection
coreNodeName=core_node2
When I did this and rebooted 'solr6' everything appeared golden. The "Cloud" web page looked right in the Admin web page - and when I added documents to 'solr5' they were available in 'solr6' if I hit it directly from the Admin web pages.
I would be grateful if someone can tell me how to achieve this without a hack like this... or if this IS the right way to do this...
=============================
In answer to #Mani and the suggested procedure
Thanks Mani - I did try this very carefully following your steps.
In the end, I get this output from the collection status query:
john@solr6:/opt/solr$ ./bin/solr healthcheck -z 192.168.56.5,192.168.56.6,192.168.56.7/solr5_4 -c fooCollection
{
  "collection":"fooCollection",
  "status":"healthy",
  "numDocs":0,
  "numShards":1,
  "shards":[{
      "shard":"shard1",
      "status":"healthy",
      "replicas":[{
          "name":"core_node1",
          "url":"http://192.168.56.15:8983/solr/fooCollection_shard1_replica1/",
          "numDocs":0,
          "status":"active",
          "uptime":"0 days, 0 hours, 6 minutes, 24 seconds",
          "memory":"31 MB (%6.3) of 490.7 MB",
          "leader":true}]}]}
This is the kind of result I've been finding in my experimentation all along. The core will get created on one of the SOLR VMs (the one I issue the create-collection command on), but I don't get anything created on the other VM -- which, based on your steps below, I believe you also thought should occur, yes?
Also, I'll note for anyone reading that in 5.4, the command is "healthcheck" and not healthstatus. The command line shows you immediately, so it's no big deal.
===============
Update 1 :: Manual add of 2nd core
If I go to the other VM and manually add the following:
sudo mkdir /opt/solr/server/solr/fooCollection_shard1_replica2
sudo mkdir /opt/solr/server/solr/fooCollection_shard1_replica2/data
nano /opt/solr/server/solr/fooCollection_shard1_replica2/core.properties
(in here I add only collection=fooCollection and then save/close)
Then I reboot my SOLR server on that same VM:
sudo /opt/solr/bin/solr restart -c -z zoo1,zoo2,zoo3/solr
I will find a second node magically appearing in my Admin console. It will be a "follower" (i.e. not the leader) and both will be branching off "shard1" in the cloud UI.
I don't know if this is "the way" but it's the only way I've found so far. I'm going to reproduce to that point and try with the Admin UI and see what I get. That would be a little easier for my IT guys when the time comes - if it works.
===============
Update 2 :: Slight modification of create command
#Mani -- I believe I have success following your steps - and like many things, it's simple once you understand.
I reset everything (deleted directories, cleared out Zookeeper with rmr /solr) and redid everything from scratch.
I changed the "create" command slightly thus:
./bin/solr create -c fooCollection -d /home/john/conf -shards 1 -replicationFactor 2
Note the "replicationFactor 2" rather than 1.
Suddenly I did indeed have cores on both VMs.
A couple of notes:
I found that I couldn't get a happy result from the status call just by starting the SOLR 5.4 servers in Cloud mode with the Zookeeper IP addresses. The "node" in Zookeeper was not yet created.
The create command also failed at that point.
The way I found around this was to use the zkcli.sh to load the configs like this:
sudo /opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd upconfig -confdir /home/john/conf/ -confname fooCollection -z 192.168.56.5/solr
When I checked Zookeeper immediately after running this command, there was a /solr/configs/fooCollection "path".
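A quick way to confirm the upload from the command line is the same zkcli.sh with its list command (address and chroot as in the upconfig example; adjust for your setup):
sudo /opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd list -z 192.168.56.5/solr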
NOW the create command works and I assume that if I had wanted to override the configs, I could have done so at that point although I haven't tried.
I'm not positive at what point, but it seems I needed to restart the SOLR servers (probably after the create command) in order to find everything on status etc... I may be misremembering that because I've been through it so many times. If in doubt after the create command, try a restart of the servers. (The -z argument below can be IP addresses or names that resolve correctly.)
sudo /opt/solr/bin/solr restart -c -z zoo1,zoo2,zoo3/solr
sudo /opt/solr/bin/solr restart -c -z 192.168.56.5,192.168.56.6,192.168.56.7/solr
After doing these slight modifications to #Mani's recommended procedure, I get a leader and a "follower", each on a different VM, with cores in the /opt/solr/server/solr directory (fooCollection in this case), and I was able to send data to one and search the other via the Admin console, hitting the IP addresses directly.
=============
Variations
One thing anyone reading this may want to try is simply making another "node" in Zookeeper (solr5_4 for example).
I tried this and it works like a charm. Everywhere you see the /solr chroot associated with the Zookeeper ensemble, you could replace it with /solr5_4. This would allow the older SOLR VM's to keep functioning in Prod while you build out your new SOLR 5.4 "environment" and the same Zookeeper VM's could be used for both -- because a different chroot should guarantee no interaction or overlap.
Again, the "node" in Zookeeper won't be created until you do the config upload, but you need to start your SOLR process like this or you'd be in the wrong context later on. Note the "solr5_4" as the chroot.
sudo /opt/solr/bin/solr restart -c -z zoo1,zoo2,zoo3/solr5_4
Once done with testing, the solr5_4 "environment" becomes what matters for Prod and the SOLR 4.x VM's and Zookeeper "node" of solr can be removed. It should be a fairly simple matter to point a load balancer at the new SOLR VM's and do a switchover without users really even noticing.
This strategy will work for SOLR 6, 6.5, 7, and so on.
This URL also worked to add the collection/cores; however, the Solr server had to be running first.
http://192.168.56.16:8983/solr/admin/collections?action=CREATE&name=fooCollection&numShards=1&replicationFactor=2&collection.configName=fooCollection
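For the original problem of adding a node to a collection that already exists (rather than recreating it with a higher replicationFactor), the Collections API also offers an ADDREPLICA action in 5.x; a sketch against the same illustrative host, where the node parameter uses Solr's host:port_solr naming for the new instance (I have not verified this on this exact setup):
http://192.168.56.16:8983/solr/admin/collections?action=ADDREPLICA&collection=fooCollection&shard=shard1&node=192.168.56.16:8983_solr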
==================
Use as Upgrade By Replacement
In case it's not obvious, this technique (especially if using the "new" chroot in Zookeeper of something like /solr5_4 or similar) gives you the luxury of leaving your older version of SOLR running for as long as you want, allowing a re-indexing of all your data to take days if needed.
I haven't tried, but I'm guessing a backup of the index could be dropped into the new machines as well.
I just wanted readers to understand that this was an approach intended to make upgrades really low stress and straightforward. (Don't need to upgrade in place, just build new VMs and install latest version of SOLR.)
This would allow the switch-over to occur without affecting prod until you're ready to drop the hammer and re-direct your load balancer at the new SOLR ip addresses (Which you will have already tested of course...)
The one assumption here is that you have the resources to bring up a set of SOLR VMs or physical servers to match whatever you already have in Production. Obviously, if you're resource-limited to only the boxes or VMs you have, upgrade-in-place may be your only option.
This is how I would do it. I am assuming that you have the luxury of downtime and the ability to completely reindex the documents, since you are essentially upgrading from 4.9 to 5.4.
Stop the 4.9 solr nodes and uninstall solr.
Remove the config from zk nodes using zkcli.sh with the clear command.
Install the solr on both solr5 & solr6 vm
Start both the solr nodes and make sure both can talk to zk. =>
On solr5 vm ./bin/solr start -c -z zk1:port1,zk2:port1,zk3:port1
On solr6 vm ./bin/solr start -c -z zk1:port1,zk2:port1,zk3:port1
Verify the status of Solrcloud using ./bin/solr status => this should return liveNodes as 2
Now create the fooCollection using the Collections API from any one of the solr nodes. This uploads the configsets to zookeeper and also creates the collection =>
./bin/solr create -c fooCollection -d /home/john/conf -shards 1 -replicationFactor 1
Verify the healthstatus of the fooCollection =>
./bin/solr healthstatus -z zk1:port1,zk2:port1,zk3:port1 -c fooCollection
Now verify the config is present in Zookeeper by checking Solr-AdminConsole -> CloudSection -> Tree .. /configs
And also check the CloudSection -> Graph showing the active status on the nodes. That indicates that everything is good.
Now start pushing documents into the collection
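As noted in the update above, on 5.4 the subcommand is healthcheck rather than healthstatus; the equivalent check with the same placeholder hosts would be:
./bin/solr healthcheck -z zk1:port1,zk2:port1,zk3:port1 -c fooCollection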
The wiki below is very helpful for the steps above.
https://cwiki.apache.org/confluence/display/solr/Solr+Start+Script+Reference

Installing MySql service with --defaults-file option does not install service

I am trying to install a service using an explicit defaults file as described here http://dev.mysql.com/doc/refman/4.1/en/mysql-config-wizard-file-location.html
The command I am running is:
mysqld --install "AwesomeMysqlService" --defaults-file="C:\my.ini"
If I run this I get a warning and the service is not installed:
2014-04-04 12:01:17 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
I have tried adding --explicit_defaults_for_timestamp=1, and that gets rid of the warning; however, the service is still not installed.
No further output is produced even when running with --debug
If I just install the service without the defaults file I get back:
Service successfully installed.
I don't understand why this is not working. I have recently upgraded MySQL to 5.6, so I wonder if this has something to do with it.
I am probably doing something silly :P
Thanks
J
It turns out that mysqld only allows you to specify one additional argument if you use a service name.
E.g. if you just use the default instance name (MySQL) you can add as many arguments as you want:
mysqld --install --port=3100 --datadir="C:\Data" etc.
However, if you give your instance a name, you can only specify one additional argument:
mysqld --install fooSql --defaults-file=my.ini
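Since only one additional argument is accepted for a named service, the practical workaround is to move everything else into the defaults file you pass; a sketch of what C:\my.ini might carry (values purely illustrative):
[mysqld]
port=3100
datadir=C:/Data
explicit_defaults_for_timestamp=1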

MacPorts Apache2 Stopped Launching on Boot

Something that I've noticed recently on two different machines is that Apache2 installed via MacPorts seems to have stopped launching when I boot up. The honest truth is that I can't swear it did so before, but it's something I think I'd notice because installing the LaunchDaemon is part of my install process. In fact, if I try to reload the LaunchDaemon, it fails:
$ sudo launchctl load -w /Library/LaunchDaemons/org.macports.apache2.plist
org.macports.apache2: Already loaded
Unless I start Apache manually (using sudo apachectl restart), grep'ing for either "apache2" or "httpd" in my process list only produces this:
$ sudo ps -ef | egrep "apache2|httpd"
0 52 1 0 0:00.06 ?? 0:00.08 /opt/local/bin/daemondo --label=apache2 --start-cmd /opt/local/etc/LaunchDaemons/org.macports.apache2/apache2.wrapper start ; --stop-cmd /opt/local/etc/LaunchDaemons/org.macports.apache2/apache2.wrapper stop ; --restart-cmd /opt/local/etc/LaunchDaemons/org.macports.apache2/apache2.wrapper restart ; --pid=none
1410639199 6960 6792 0 0:00.00 ttys001 0:00.00 egrep apache2|httpd
Looks like the daemon wrapper itself is in place, but no httpd process is running. As far as I know/can tell, the relevant executables (httpd and apachectl) are executable by everyone.
Has anyone else noticed this? Any ideas?
UPDATE
As requested below, I did execute launchctl list. The list is long and I'm not sure how to snip it, but suffice to say that no org.macports.* items are listed. That in itself is interesting because my MySQL daemon is loaded the same way. It works, but also doesn't appear in the list. Let me know if the entire output is really needed.
UPDATE
I assumed that I had executed launchctl list under sudo but, prompted by mipadi's comment below, I tried again making sure that I did so - and I had assumed incorrectly. When executed under sudo, the MacPorts items appear:
51 - org.macports.mysql5
52 - org.macports.apache2
I'm not sure whether that will help, but it's a little more info nonetheless.
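For reference, the check that finally showed them (the grep just trims the long listing):
sudo launchctl list | grep org.macports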
UPDATE
I've asked a different, but related, question at LaunchDaemons and Environment Variables. I'll update both questions as I learn more.
UPDATE
Today, based on mailing list input, I tried using a wildcard home directory. Academically, it's a little more inclusive than I'd like, but the practical reality is that I'm the only one using this computer; certainly the only one who'd have Apache config files lying around.
Include "/Users/*/Dropbox/Application Support/apache/conf.d.osx/*.conf"
Include "/Users/*/Library/Application Support/MacPorts/apache/conf.d/*.conf"
Unfortunately...
httpd: Syntax error on line 512 of /opt/local/apache2/conf/httpd.conf: Wildcard patterns not allowed in Include /Users/*/Dropbox/Application Support/apache/conf.d.osx/*.conf
I found my answer to this problem here:
https://trac.macports.org/ticket/36101
"I apparently fixed this when changing my local dnsmasq config. In /etc/hosts I added my servername (gala) to the loopback entry:
127.0.0.1 localhost gala
and then I changed ServerName in /opt/local/apache2/conf/httpd.conf to match:
ServerName gala
Apache now starts at boot for me."
Since I now know why Apache has stopped loading on startup, I'm going to articulate that answer and mark this question as answered. The reason Apache has stopped launching on boot is that I'm trying to share an httpd.conf file across systems. The config file needs to Include files from directories that exist within my home directory. Since the home directory is different on each machine, I was trying to reference the ${HOME} environment variable.
This works fine when manually starting after the machine is booted, but fails on startup because the environment variable isn't yet set. As mentioned above, see this question for more information.
Rob:
Had the same problem: "sudo launchctl load -w ..." started Apache2 while I was logged in, but did not work during startup (the "-w" should have taken care of that). Also, as you noticed, the daemon seems to be registered with launchctl. It will show up with "sudo launchctl list" and another "sudo launchctl load ..." will result in the error message.
I played with "sudo port load apache2" and "sudo port unload apache2", but could not get httpd running on reboot.
In the end, I got rid of the MacPorts startup item: "sudo port unload apache2", checked with "sudo launchctl list" that org.macports.apache2 is no longer registered for startup.
Afterwards, I followed the steps on http://diymacserver.com > Docs > Tiger > Starting Apache. I only had to adapt the path from /usr/local/... to /opt/local/...
Now the MacPorts Apache2 is starting fine with every reboot.
Good luck, Klaus
I found that my MacPorts apache2 was not starting on boot because of an “error” in my httpd.conf.
I was using
Listen 127.0.0.1:80
Listen 192.168.2.1:80
Listen 123.123.123.123:80 # Example IP, not the one I was really using
And in Console.app I was seeing
4/8/12 4:59:06.208 PM org.macports.apache2: (49)Can't assign requested address: make_sock: could not bind to address 192.168.2.1:80
4/8/12 4:59:06.208 PM org.macports.apache2: no listening sockets available, shutting down
4/8/12 4:59:06.208 PM org.macports.apache2: Unable to open logs
I tried adjusting permissions on all the log folders (despite the fact that logs were being written just fine when I manually started apache2) and that didn't help.
Even though the Apache Documentation for Listen clearly states
Multiple Listen directives may be used to specify a number of addresses and ports to listen to. The server will respond to requests from any of the listed addresses and ports.
I decided to try switching back to just using
Listen 80
And after doing so apache2 is starting on boot with no errors or warnings.
If you're using Subversion with Apache, you may find that Apache is not starting because the mod_dav_svn.so file has moved to /opt/local/libexec. You'll need to adjust your Apache startup files to account for the new location of this file.
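A hedged example of the adjusted directive in httpd.conf (dav_svn_module is the usual module name; confirm the path against your own MacPorts install):
LoadModule dav_svn_module /opt/local/libexec/mod_dav_svn.so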
In newer versions of MacPorts you can run sudo port load apache2 to instruct MacPorts to take care of the launchctl setup and automatically start the process. To stop the process run port unload.
After loading check /opt/local/apache2/logs/error_log for errors, including configuration issues.
In addition to my previous answer, I have also found that sometimes Apache fails to start because something else in the system is not yet ready.
On one OS X Server machine I also use DNS to create an "internal only" DNS name for the machine, and that name is used in my Apache configuration. Sometimes, when Apache tries to start, the DNS server is not yet ready, and Apache fails to load because the hostname isn't valid.
I have also seen this on other, non-Server systems without local DNS, where something else required by Apache must not be ready yet.
One thing that has worked is to edit the apache2.wrapper located at /opt/local/etc/LaunchDaemons/org.macports.apache2/apache2.wrapper that MacPorts’ daemondo uses to start up Apache.
Edit the Start() function to add a sleep command to wait a bit before launching Apache.
Original (Lines 14-17 on my machine)
Start()
{
[ -x /opt/local/apache2/bin/apachectl ] && /opt/local/apache2/bin/apachectl start > /dev/null
}
With wait time added
Start()
{
[ -x /opt/local/apache2/bin/apachectl ] && sleep 10 && /opt/local/apache2/bin/apachectl start > /dev/null
}
