Solr full-import not working when run using the lynx command

I want to set up a cron job on Amazon EC2 Linux to run a Solr full-import at 12:15 AM every night.
Before setting up the cron job I tried testing in the terminal whether it works. I used the command below:
/usr/bin/lynx http://amzon-instance-ip:8983/solr/work/dataimport?command=full-import
Output of the command:
[1] 15153
But when I go to the URL below to check whether the full-import actually initiated, I see the full-import command is not running.
http://amzon-instance-ip:8983/solr/#/workb/dataimport//dataimport
Can anyone help me understand why the Solr full-import is not running with the lynx command? Am I using lynx correctly, or do I need a different approach? Any suggestions, please.

I spent some time searching the internet for why a URL would not work with lynx, but could not find a solution.
Thanks to @Oyeme's suggestion, I got two ways to get my URL running, using the Linux curl and wget commands.
Using the Linux curl command:
curl -s 'http://amzon-instance-ip:8983/solr/work/dataimport?command=full-import&clean=false' > /dev/null
Using the Linux wget command:
wget -O /dev/null 'http://amzon-instance-ip:8983/solr/work/dataimport?command=full-import&clean=false'
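For the nightly schedule itself, a crontab entry along these lines should work. This is only a minimal sketch, assuming curl is installed at /usr/bin/curl; the 15 0 * * * fields fire at 12:15 AM every night, matching the schedule asked about above:
15 0 * * * /usr/bin/curl -s 'http://amzon-instance-ip:8983/solr/work/dataimport?command=full-import&clean=false' > /dev/null 2>&1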

Related

Why does starting Solr with multiple Zookeeper IPs fail?

I'm trying to set up 3 Solr (8.4.0) servers with a Zookeeper (3.7.0) ensemble on Windows Server 2019. Each server has one Solr instance and one Zookeeper installed. The problem I'm facing is that I get an error when trying to start Solr pointing to multiple Zookeeper IPs:
.\solr start -c -z "172.29.70.47:2181,172.29.70.48:2181"
Console output:
Invalid command-line option: 172.29.70.48:2181
I have tried various combinations of this command, with or without quotes, with or without ports, etc., but it fails every time. If I only specify one Zookeeper IP and port, the command runs fine. As soon as I specify more than one IP, it fails.
I've tried setting ZK_HOST in solr.in.cmd, but it also fails to start. Even the docs (https://solr.apache.org/guide/8_4/setting-up-an-external-zookeeper-ensemble.html#using-the-z-parameter-with-binsolr) show that configuring multiple IPs should be possible using the -z parameter.
What am I missing?
Thanks to MatsLindh, I was able to figure out what the issue was. When using PowerShell, the double quotes need to be wrapped in single quotes, so the command should look like:
.\solr start -c -z '"172.29.70.47:2181,172.29.70.48:2181,172.29.70.49:2181"'
Using Command Prompt on Windows, double quotes work as expected and the command should be:
solr start -c -z "172.29.70.47:2181,172.29.70.48:2181,172.29.70.49:2181"

Does Solr have a folder watcher?

I am using Solr 7.2 to index 'document files' using the post tool.
However, I want this to rerun every time there is a change to the document folder.
So I am using Jenkins with the folder watcher trigger (FSTrigger), which calls post to re-index like this:
/opt/solr/bin/solr delete -c resumes
sudo -u solr /opt/solr/bin/solr create -c resumes -d /opt/solr/example/files/conf
/opt/solr/bin/post -c resumes /home/chak/Documents
Is there a folder watcher in Solr itself, so I can avoid using Jenkins?
No, Solr does not have any watch capabilities. Seeing as it's also meant to run as a cluster on multiple servers, I'm pretty sure that functionality would be considered external to Solr (if anywhere, possibly integrated into the post tool).
That being said, you don't have to use something as complex as Jenkins to implement this. Using inotifywait you could implement the same functionality with a couple of lines of bash.
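For example, a minimal bash sketch using inotifywait (from the inotify-tools package), reusing the collection name and folder from the question:
#!/bin/bash
# whenever something changes under the watched folder, re-run the post tool
while inotifywait -r -e create,modify,delete,move /home/chak/Documents; do
    /opt/solr/bin/post -c resumes /home/chak/Documents
done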

How to add a new machine to an already running SolrCloud?

So, I have two Solr node instances running along with an embedded ZooKeeper on a single machine, set up following the Set up SolrCloud link. Now I want to add a new machine to this cluster. I run bin\solr start -cloud -s ./solr -h newMachineIP -p 9000 -z oldMachineIP:9983. It shows a successful startup, but when I create a new collection it gives me an error saying "Server refused connection at: http://newMachineIp:9000/solr".
Just a guess, but... does C:\path\to\dir\solr-7.1.0\solr-7.1.0\server\solr\gettingstarted contain any spaces? If so, install Solr into a path with no spaces; this has been an issue on Windows before, and it's possible it still is in some code paths. Solr on Windows gets much less testing than on Linux.
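For illustration, a sketch of the same start command run from an install path without spaces (the C:\solr-7.1.0 directory here is hypothetical; the arguments are copied from the question):
cd C:\solr-7.1.0\bin
solr start -cloud -s ./solr -h newMachineIP -p 9000 -z oldMachineIP:9983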

On CentOS 7 Linux, how to schedule jobs to run drush to re-index Solr, run cron, and clear cache for Drupal 7 sites

Hi, I'm a newbie to server management... We need to automate an hourly script to run drush to re-index Solr, run cron, and clear the cache on multiple servers. I'm sure there has to be a .bat file or something?
First of all: there are no '.bat' files on any Unix system (of course you can write scripts named something.bat, but nothing special happens ;-)).
You need to have drush installed somewhere on your system. I usually install it in /usr/local/share/drush and put a link from /usr/local/share/drush/drush to /usr/local/bin/drush. Then run crontab -e to edit the schedule. An editor is launched inside your console window. If it shows an empty window or the file contains only lines starting with "#", then put:
MAILTO=your.mail@example.com
PATH=/usr/local/bin:/bin:/usr/bin:$HOME/bin
@daily drush @live -q -y cron > /dev/null
In this case, drush cron is executed daily for a Drupal installation with the site alias @live. The output is sent to /dev/null so that I do not get any error messages.
PS: Get familiar with the cron system and the crontab command, as well as shell scripting. They are standard Unix tools and are needed for these kinds of tasks.
PS2: You will also want to know the concept of drush site aliases. Run drush topic docs-aliases to learn more about it.
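For the hourly automation asked about above, a crontab sketch could look like the following; it assumes the @live site alias from the example and the apachesolr module's drush integration, so the exact command names may differ per setup:
# every hour: re-index Solr, run cron, and clear caches for the @live site
0 * * * * drush @live solr-index -q -y > /dev/null
15 * * * * drush @live core-cron -q -y > /dev/null
30 * * * * drush @live cache-clear all -q -y > /dev/null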

Cannot start HBase start_hbase.sh: command not found

Trying, in vain so far, to make Nutch + Solr work. I'm having a very hard time understanding how to go about this with Nutch and Solr. I have followed all the tutorials I could find on the internet, most of them for older versions, but I still could not make any of them work. At the moment I'm following this guide.
I have unpacked Nutch 2.2.1, Solr 4.3.1, and HBase 0.90.4 to a directory on my XAMPP local server (none of the tutorials said where I should unpack them, so I assumed the local server).
I'm using Cygwin on Windows 7. JAVA_HOME points to /cygdrive/c/PROGRA~1/java/jdk1.8.0_05.
I'm stuck at the Configure HBase step. As the tutorial dictates, I have configured /hbase-0.90.4/conf/hbase-site.xml as follows:
<property>
<name>hbase.rootdir</name>
<value>file:///C:/xampp/htdocs/trynutch/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>C:/xampp/htdocs/trynutch/zookeeper</value>
</property>
As per the tutorial, after this I should be able to run the following command:
$ ./trynutch/hbase/bin/start_hbase.sh
When I run it in the Cygwin terminal, it gives an error:
DM@comp ~
$ cd C:/xampp/htdocs/trynutch/hbase-0.90.4/bin
DM@comp /cygdrive/c/xampp/htdocs/trynutch/hbase-0.90.4/bin
$ start_hbase.sh
-bash: start_hbase.sh: command not found
I'd appreciate any information.
Try with the following command:
./start_hbase.sh
If it's not executable, make it executable first with the following command:
chmod a+x start_hbase.sh
Or just try sh start-hbase.sh from the HBase bin directory (note the hyphen in the script name: start-hbase.sh):
cd C:/xampp/htdocs/trynutch/hbase-0.90.4/bin
sh start-hbase.sh
