I'm in the middle of configuring SolrCloud with ZooKeeper, but I'm struggling to load the config into ZK.
Here are my steps:
Configure an ensemble of 3 ZooKeeper nodes; I see 1 leader and 2 followers.
Configure a small SolrCloud cluster of 2 nodes, each started as follows:
/bin/solr start -c -z <ip1>:2181,<ip2>:2181,<ip3>:2181 -noprompt
Then I tried to load the config into ZK using zkCli.sh:
./bin/zkCli.sh -zkhost <ip1>:2181,<ip2>:2181,<ip3>:2181 -cmd upconfig -confname config1 -confdir /folder/with/schema.xml (it comes from my standalone Solr instance)
Create the Solr collection using the Collections API:
http://<solr_ip>:8983/solr/admin/collections?action=CREATE&name=collection_test&numShards=2&replicationFactor=2&maxShardsPerNode=2
Link the config to the collection, again using zkCli.sh:
./bin/zkCli.sh -zkhost 127.0.0.1:2181 -cmd linkconfig -collection collection_test -confname config1
At this point I should see the config loaded, but nothing happens.
I used the steps below to configure SolrCloud in my VM.
SolrCloud Setup Instructions
Infrastructure
a. 3 Unix boxes
b. 1 ELB
c. Create CNAMEs for the Unix boxes: cl-solr1, cl-solr2, cl-solr3
Installations
a. Install zookeeper-3.4.6 (zookeeper-3.4.6.tar.gz)
b. Install solr-5.2.1 (solr-5.2.1.tgz)
c. Install OpenJDK Runtime Environment 1.7.0_79
Setup
a. Set JAVA_HOME
b. On cl-solr1, cl-solr2 and cl-solr3, create a zoo.cfg file with the content below at /opt/myname/zookeeper-3.4.6/conf:
tickTime=2000
dataDir=/var/lib/zookeeper/data
clientPort=2181
initLimit=5
syncLimit=2
server.1=cl-solr1:2888:3888
server.2=cl-solr2:2888:3888
server.3=cl-solr3:2888:3888
c. Create a myid file on each ZooKeeper server (cl-solr1, cl-solr2 & cl-solr3) using the commands below:
$ mkdir -p /var/lib/zookeeper/data/
$ echo 1 > /var/lib/zookeeper/data/myid    # 1 for cl-solr1, 2 for cl-solr2, 3 for cl-solr3
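If you manage the boxes over SSH, the other two nodes can be done in one go, for example (a sketch; it assumes SSH access to the hosts named above):
ssh cl-solr2 'mkdir -p /var/lib/zookeeper/data && echo 2 > /var/lib/zookeeper/data/myid'
ssh cl-solr3 'mkdir -p /var/lib/zookeeper/data && echo 3 > /var/lib/zookeeper/data/myid'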
Start ZooKeeper
a. /opt/myname/zookeeper-3.4.6/bin/zkServer.sh start
b. /opt/myname/zookeeper-3.4.6/bin/zkServer.sh status
c. Check the status in more detail via:
echo stat | nc cl-solr1 2181
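To quickly see which node is the leader and which are followers, a small loop over the ensemble helps (a sketch using the hostnames above):
for h in cl-solr1 cl-solr2 cl-solr3; do
  echo -n "$h: "; echo stat | nc "$h" 2181 | grep Mode
done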
Start Solr
a. cl-solr1$ /opt/myname/solr-5.2.1/bin/solr start -c -z cl-solr1:2181,cl-solr2:2181,cl-solr3:2181 -h cl-solr1
b. cl-solr2$ /opt/myname/solr-5.2.1/bin/solr start -c -z cl-solr1:2181,cl-solr2:2181,cl-solr3:2181 -h cl-solr2
c. cl-solr3$ /opt/myname/solr-5.2.1/bin/solr start -c -z cl-solr1:2181,cl-solr2:2181,cl-solr3:2181 -h cl-solr3
Create a new Collection
a. From one of the nodes (cl-solr1), run the commands below:
i. mkdir -p /opt/myname/solr-5.2.1/server/solr/pats/conf
ii. Copy the conf folder from the current (standalone) system into the directory above
iii. /opt/myname/solr-5.2.1/bin/solr create -c my_colln_name -d /opt/myname/solr-5.2.1/server/solr/pats/conf -n myname_cfg -shards 2 -replicationFactor 2
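As an alternative to letting bin/solr create upload the configuration, the config can be pushed to ZooKeeper explicitly with Solr's own zkcli.sh (shipped under server/scripts/cloud-scripts; note this is not ZooKeeper's zkCli.sh, which has no upconfig command) and the collection created through the Collections API. A sketch using the names from this setup:
/opt/myname/solr-5.2.1/server/scripts/cloud-scripts/zkcli.sh -zkhost cl-solr1:2181,cl-solr2:2181,cl-solr3:2181 -cmd upconfig -confname myname_cfg -confdir /opt/myname/solr-5.2.1/server/solr/pats/conf
curl "http://cl-solr1:8983/solr/admin/collections?action=CREATE&name=my_colln_name&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=myname_cfg"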
Related
I tried to run my first cluster, as I'm currently learning in the hope of working in Cloud Engineering.
What I did:
I have 3 cloud servers (Ubuntu 20.04), all in one network.
I've successfully set up my etcd cluster (cluster-health shows all 3 network IPs of the servers: 1 leader, 2 followers).
Then I installed k3s on my first server:
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="https://10.0.0.2:2380,https://10.0.0.4:2380,https://10.0.0.3:2380"
I've done the same on the 2 other servers; the only difference is that I added the token value, which I checked beforehand with:
cat /var/lib/rancher/k3s/server/token
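The join command on the other two servers looked roughly like this (a sketch; whether the token is passed via the K3S_TOKEN environment variable or the --token flag is an assumption):
curl -sfL https://get.k3s.io | K3S_TOKEN="<token from server 1>" sh -s - server \
  --datastore-endpoint="https://10.0.0.2:2380,https://10.0.0.4:2380,https://10.0.0.3:2380"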
Now everything seems to have worked, but when I run kubectl get nodes it only shows me one node...
Does anyone have any tips or answers for me?
k3s service file:
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
server \
'--node-external-ip=78.46.241.153'
'--node-name=node-1'
' --flannel-iface=ens10'
' --advertise-address=10.0.0.2'
' --node-ip=10.0.0.2'
' --datastore-endpoint=https://10.0.0.2:2380,https://10.0.0.4:2380,https://10.0.0.3:2380' \
I'm trying to write a wrapper script for songbook generation using lilypond, latex and sejda-console (for the PDF part). Everything works so far, but I have a problem with sejda that is driving me nuts.
Here is the relevant part of my code:
for %%i in (%f%) do (
sejda-console.bat extractbybookmarks -f ".\%%~ni.pdf" -o "export\%%~ni\%title%.pdf" -l 2 -p [BOOKMARK_NAME] -e "%title%" --overwrite
)
where f is a ";"-separated list of files. This command works for the first file, but fails for all others. I can't find any difference between the commands that sejda receives. Here is my console output:
make_sheet.bat -t "Live it up" --supress *.lytex
Configuring Sejda 3.2.30
Starting execution with arguments: 'extractbybookmarks -f .\book_drums.pdf -o export\book_drums\Live it up.pdf -l 2 -p [BOOKMARK_NAME] -e Live it up --overwrite'
Java version: '1.8.0_151'
Validating parameters.
Starting task (org.sejda.impl.sambox.ExtractByOutlineTask#28701274) execution.
Opening C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\.\book_drums.pdf
Retrieving outline information for level 2 and match regex Live it up
Starting extraction by outline, level 2 and match regex Live it up
Found 0 inherited images and 0 inherited fonts potentially unused
Starting extracting Live it up pages 9 9
Created output temporary buffer C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\export\book_drums\.sejdaTmp2789047920522272436.tmp
Appended relevant outline items
Filtering annotations
Skipped acroform merge, nothing to merge
Ending extracting Live it up
Task progress: 0% done
Moving C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\export\book_drums\.sejdaTmp2789047920522272436.tmp to C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\export\book_drums\Live it up.pdf.
Extraction completed and outputs written to org.sejda.model.output.FileOrDirectoryTaskOutput#478190fc[C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\export\book_drums\Live it up.pdf]
Task (org.sejda.impl.sambox.ExtractByOutlineTask#28701274) executed in 0 seconds
Completed execution
C:\Users\skr1_\Desktop\Tools\Songbook\Sample>(sejda-console.bat extractbybookmarks -f ".\book_general.pdf" -o "export\book_general\Live it up.pdf" -l 2 -p [BOOKMARK_NAME] -e "Live it up" --overwrite )
Configuring Sejda 3.2.30
Starting execution with arguments: 'extractbybookmarks -f .\book_general.pdf -o export\book_general\Live it up.pdf -l 2 -p [BOOKMARK_NAME] -e Live it up --overwrite'
Java version: '1.8.0_151'
Invalid value (File '.\book_general.pdf' does not exist): --files -f value... : pdf files to operate on. A list of existing pdf files (EX. -f /tmp/file1.pdf or -f /tmp/password_protected_file2.pdf:secret123) (required)
Invalid value (File '.\book_general.pdf' does not exist): --files -f value... : pdf files to operate on. A list of existing pdf files (EX. -f /tmp/file1.pdf or -f /tmp/password_protected_file2.pdf:secret123) (required)
C:\Users\skr1_\Desktop\Tools\Songbook\Sample>(sejda-console.bat extractbybookmarks -f ".\book_guitar.pdf" -o "export\book_guitar\Live it up.pdf" -l 2 -p [BOOKMARK_NAME] -e "Live it up" --overwrite )
Configuring Sejda 3.2.30
Starting execution with arguments: 'extractbybookmarks -f .\book_guitar.pdf -o export\book_guitar\Live it up.pdf -l 2 -p [BOOKMARK_NAME] -e Live it up --overwrite'
Java version: '1.8.0_151'
Invalid value (File '.\book_guitar.pdf' does not exist): --files -f value... : pdf files to operate on. A list of existing pdf files (EX. -f /tmp/file1.pdf or -f /tmp/password_protected_file2.pdf:secret123) (required)
Invalid value (File '.\book_guitar.pdf' does not exist): --files -f value... : pdf files to operate on. A list of existing pdf files (EX. -f /tmp/file1.pdf or -f /tmp/password_protected_file2.pdf:secret123) (required)
Even worse, if I copy the commands that sejda receives and paste them as arguments for a new command, everything works fine.
I suspect that something is happening with the working directory in between, but I don't get it.
Also, note that the output includes the command for subsequent passes of the for-loop (starting with "(sejda-console.bat ...") though echo is off. It is not included for the first run, however.
I'm not an expert in programming, especially not in batch, and any help would be much appreciated.
I'd suggest that sejda-console.bat is changing the current directory.
Try saving and restoring the directory around the call:
pushd "%cd%"
call sejda-console.bat ...
popd
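Applied to the loop from the question, that might look like this (a sketch; the file list %f% and the %title% variable are the ones from the question):
for %%i in (%f%) do (
    rem remember the working directory, call sejda, then restore it
    pushd "%cd%"
    call sejda-console.bat extractbybookmarks -f ".\%%~ni.pdf" -o "export\%%~ni\%title%.pdf" -l 2 -p [BOOKMARK_NAME] -e "%title%" --overwrite
    popd
)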
I am having difficulty getting more than one graph's worth of perfdata to display using pnp4nagios with Naemon.
It appears the format is not what Nagios expects to pass on to pnp4nagios.
I think I have the most recent version of check_int.pl:
Program : check_netint.pl or check_snmp_netint.pl
Version : 2.4 alpha 9
Date : Nov 30, 2012
The perfdata format looks OK. In other checks that produce multiple graphs, there seems to be an equal number of entries on the left side of the "|" and on the right side. I've also tried entering -w and -c values so that the ";;;" below are filled in; this does not seem to make a difference.
I'd like to get 4 graphs from the return data below: in/out for each of the two interfaces.
Any ideas?
% /usr/lib64/naemon/plugins//check_int.pl -H 10.61.146.227 -C sparkred -2 -n bond0 -w -z -f -e -S -k -Y -B
bond0:UP (1717.9Kbps/3854.1Kbps), bond0.1422:UP (19.0Kbps/326.0Kbps) (2 UP): OK | 'bond0_in_bps'=1717881;;; 'bond0_out_bps'=3854092;;; 'bond0.1422_in_bps'=19023;;; 'bond0.1422_out_bps'=325999;;;
Thanks!
I think you're missing the pnp4nagios template written specifically for this particular Nagios plugin.
Try downloading:
https://raw.githubusercontent.com/willixix/WL-NagiosPlugins/master/graphing_templates/pnp4nagios/check_snmp_network_interface_linux.php
and placing it in your pnp4nagios templates folder.
On my system, this path ends up being:
/usr/local/pnp4nagios/share/templates/check_snmp_network_interface_linux.php
Then restart the nagios service.
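On a typical install, those steps might look something like this (a sketch; the template path is the one mentioned above and the service name may differ on your system):
cd /usr/local/pnp4nagios/share/templates/
wget https://raw.githubusercontent.com/willixix/WL-NagiosPlugins/master/graphing_templates/pnp4nagios/check_snmp_network_interface_linux.php
service nagios restart    # or: systemctl restart nagios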
Here's more information about pnp4nagios templates:
http://docs.pnp4nagios.org/pnp-0.6/tpl
This is the site of the Nagios plugin's author:
http://william.leibzon.org/nagios/
Is it possible to mass rename objects on Google Cloud Storage using gsutil (or some other tool)? I am trying to figure out a way to rename a bunch of images from *.JPG to *.jpg.
Here is a native way to do this in bash, with a line-by-line explanation of the code below:
gsutil ls gs://bucket_name/*.JPG > src-rename-list.txt
sed 's/\.JPG/\.jpg/g' src-rename-list.txt > dest-rename-list.txt
paste -d ' ' src-rename-list.txt dest-rename-list.txt | sed -e 's/^/gsutil\ mv\ /' | while read line; do bash -c "$line"; done
rm src-rename-list.txt; rm dest-rename-list.txt
The solution builds 2 lists, one for the source and one for the destination files (to be used in the "gsutil mv" command):
gsutil ls gs://bucket_name/*.JPG > src-rename-list.txt
sed 's/\.JPG/\.jpg/g' src-rename-list.txt > dest-rename-list.txt
The string "gsutil mv " and the two lists are concatenated line by line using the code below:
paste -d ' ' src-rename-list.txt dest-rename-list.txt | sed -e 's/^/gsutil\ mv\ /'
This then runs each line in a while loop:
while read line; do bash -c "$line"; done
Lastly, clean up and delete the files created:
rm src-rename-list.txt; rm dest-rename-list.txt
The above has been tested against a working Google Storage bucket.
gsutil supports URI wildcards:
https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames
EDIT
From the gsutil 3.0 release notes:
As part of the bucket sub-directory support we changed the * wildcard to match only up to directory boundaries, and introduced the new ** wildcard...
Do you have directories under the bucket? If so, maybe you need to go down into each directory, or use **:
gsutil -m mv gs://my_bucket/**.JPG gs://my_bucket/**.jpg
or
gsutil -m mv gs://my_bucket/mydir/*.JPG gs://my_bucket/mydir/*.jpg
EDIT
gsutil doesn't support wildcards for the destination so far (as of 4/12/'14), and neither does the API.
So at this moment you need to retrieve the list of all JPG files and rename each file.
Python example:
import subprocess

# list all .JPG objects in the bucket
files = subprocess.check_output("gsutil ls gs://my_bucket/*.JPG", shell=True)
files = files.decode().split("\n")[:-1]

# rename each object by swapping the .JPG extension for .jpg
for f in files:
    subprocess.call("gsutil mv %s %s" % (f, f[:-3] + "jpg"), shell=True)
Please note that this would take hours for a large bucket.
gsutil does not support parallelized mass copy/rename.
You have two options:
use a dataflow process to do the operation
or
use GNU parallel to launch it using several processes
If you use GNU Parallel, it is better to deploy a new instance to do the mass copy/rename operation:
First: Make a list of the files you want to copy/rename (a file with source and destination separated by a space or tab; one way to build this list is sketched after these steps), like this:
gs://origin_bucket/path/file gs://dest_bucket/new_path/new_filename
Second: Launch a new compute instance
Third: Log in to that instance and install GNU parallel:
sudo apt install parallel
Fourth: Authorize yourself with Google (gcloud auth login), because the Compute Engine service account might not have permission to move/rename the files:
gcloud auth login
Fifth: Run the copy (gsutil cp) or move (gsutil mv) operation with parallel:
parallel -j 20 --colsep ' ' gsutil mv {1} {2} :::: file_with_source_destination_uris.txt
This will run 20 parallel gsutil mv operations.
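One way to build that list for the .JPG to .jpg rename from the question could be (a sketch; the bucket name and output filename are just the ones used above, and it assumes object names contain no spaces):
gsutil ls 'gs://origin_bucket/**.JPG' | awk '{ dst=$0; sub(/\.JPG$/, ".jpg", dst); print $0, dst }' > file_with_source_destination_uris.txt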
Yes, it is possible:
Move/rename objects and/or subdirectories
I've deployed a new instance of Nagios on a fresh install of CentOS 7 via the EPEL repository. So the Nagios Core version is 3.5.1.
After installing nagios and nagios-plugins-all (via yum), I've created a number of hosts and service definitions, have tested my configuration with nagios -v /etc/nagios/nagios.cfg, and have Nagios up and running!
Unfortunately, my host checks are failing (although my service checks are working perfectly fine).
Within the Nagios Web GUI / Dashboard, if I drill down into a Host page with the "Host State Information", I see this being reported for "Status Information" (IP address removed):
Status Information: /usr/bin/ping -n -U -w 30 -c 5 {my-host-ip-address}
CRITICAL - Could not interpret output from ping command
So in my troubleshooting, I drilled down into the Nagios Plugins directory (/usr/lib64/nagios/plugins), and ran a test with the check_ping plugin consistent with the way check-host-alive runs the command (see below for my check-host-alive command definition):
./check_ping -H {my-ip-address} -w 3000.0,80% -c 5000.0,100% -p 5
This check_ping command returns the following output:
PING OK - Packet loss = 0%, RTA = 0.63 ms|rta=0.627000ms;3000.000000;5000.000000;0.000000 pl=0%;80;100;0
I haven't changed the definition of how check_ping works, and can confirm that I'm getting a "PING OK" whenever the command is run the same way that check-host-alive runs the command, so I cannot figure out what's going on!
Below are the command definitions for check-host-alive as well as check_ping.
# 'check-host-alive' command definition
define command{
command_name check-host-alive
command_line $USER1$/check_ping -H $HOSTADDRESS$ -w 3000.0,80% -c 5000.0,100% -p 5
}
{snip}
# 'check_ping' command definition
define command{
command_name check_ping
command_line $USER1$/check_ping -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p 5
}
Any suggestions on how I can fix my check-host-alive command definition to work properly and evaluate the output of check_ping properly?
Edit
Below is the full define host {} template I'm using:
define host {
host_name myers ; The name of this host template
alias Myers
address [redacted]
check_command check-host-alive
contact_groups admins
notifications_enabled 0 ; Host notifications are enabled
event_handler_enabled 1 ; Host event handler is enabled
flap_detection_enabled 1 ; Flap detection is enabled
failure_prediction_enabled 1 ; Failure prediction is enabled
process_perf_data 1 ; Process performance data
retain_status_information 1 ; Retain status information across program restarts
retain_nonstatus_information 1 ; Retain non-status information across program restarts
notification_period 24x7 ; Send host notifications at any time
register 1
max_check_attempts 2
}
For anyone else who runs into this issue, there's another option besides changing permissions on ping: simply change the host check command to use check_host rather than check_ping. While there are certainly some differences in functionality, the overall end result is the same.
There are those who will say this isn't a good option because check_ping lets you tune warning/critical ranges, but it should be remembered that host checks aren't even executed until all service checks for a given host have failed. Anyway, if you're interested in testing throughput, there are MUCH better ways of going about it than relying on ICMP, which is the lowest-priority traffic type on a network.
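For example, the host check definition could then look something like this (a sketch; it assumes check_host is available in the same plugins directory as check_ping):
# 'check-host-alive' command definition using check_host
define command{
        command_name check-host-alive
        command_line $USER1$/check_host -H $HOSTADDRESS$
        }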
I'm sure the OP is well on to other things by now, but hopefully someone else who has this issue will benefit.
I could not find ping at /usr/bin/ping; on my system it is /bin/ping, so I set the setuid bit on it:
# chmod u+s /bin/ping
# ls -al /bin/ping
-rwsr-xr-x 1 root root 40760 Sep 26 2013 /bin/ping*
Finally, run the command below:
/usr/local/nagios/libexec/check_ping -H 127.0.0.1 -w 100.0,20% -c 500.0,60% -p 5
I was fairly certain that running chmod u+s /usr/bin/ping would solve the issue, but I was (and still am) wary about chmod'ing system files. It seems to me that there has to be a safer way to do it.
However, in the end, that's what I did - and it works. I don't like it, from a security standpoint.
I also had the same problem and the above answers did not work for me. After checking the issue further, I noticed that the reason was the IP protocol. Once I passed the correct IP protocol, it worked fine.
/usr/local/nagios/libexec/check_ping -H localhost -w 3000.0,80% -c 5000.0,100% -4
output
PING OK - Packet loss = 0%, RTA = 0.05 ms|rta=0.051000ms;3000.000000;5000.000000;0.000000 pl=0%;80;100;0
By default it's using IPv6.
/usr/local/nagios/libexec/check_ping -H localhost -w 3000.0,80% -c 5000.0,100% -6
output
/sbin/ping6 -n -U -W 30 -c 5 localhost
CRITICAL - Could not interpret output from ping command
But when integrating with the Nagios server, I was not able to pass this value as an argument. Therefore I applied the workaround below in the client-side nrpe.cfg file:
command[check_ping_args]=/usr/local/nagios/libexec/check_ping -H $ARG1$ -w $ARG2$ -c $ARG3$ -4
Here the host, warning and critical thresholds are passed by the Nagios host as below:
define service{
use generic-service
hostgroup_name all-servers
service_description Host Ping Status
check_command check_nrpe_args!check_ping_args!localhost!3000.0,80%!5000.0,100%
}
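For completeness, the check_nrpe_args command referenced above might be defined on the Nagios server roughly like this (a sketch; the argument mapping is an assumption based on the service definition, and NRPE must be configured to allow command arguments):
# 'check_nrpe_args' command definition (Nagios server side)
define command{
        command_name check_nrpe_args
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$ $ARG3$ $ARG4$
        }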