I have a WordPress site at admin.example.com. I created 3 replicas that sync with the primary WordPress site (admin.example.com); the address of the 3 replicas is example.com. I take all the files and a dump of the admin.example.com database, put the files in the right path on the 3 replicas, and change the addresses in the database dump as follows:
sed -i 's/admin.example.com/example.com/g' admin.example.com_database.sql
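For completeness, the whole workflow looks roughly like this (the database name, credentials, and paths below are placeholders, not my real setup):

# Dump the primary site's database
mysqldump -u wpuser -p wordpress_db > admin.example.com_database.sql
# Rewrite the primary hostname to the replica hostname in the dump
sed -i 's/admin.example.com/example.com/g' admin.example.com_database.sql
# Copy the WordPress files to a replica, then import the rewritten dump there
rsync -a /var/www/admin.example.com/ replica1:/var/www/example.com/
mysql -u wpuser -p wordpress_db < admin.example.com_database.sql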
But I have a problem: some breadcrumbs are not shown on the replicas.
Does anyone have an idea?
We migrated our 3 Solr servers to 3 new VMs. We are still running the same setup as we were on the old VMs (Solr 7.4). I've also run the Zookeeper upconfig command to replace our old config files so that they now use the new IPs. However, when I view the Solr Cloud UI, 2 of our old IPs are shown on the Cloud > Graph page. I verified that the Zookeeper upconfig worked, because the new configset files for my collection are there under Cloud > Tree > configs, and they show the new IPs. So I'm not sure why Cloud > Graph is still showing 2 of our old IPs. Also, when I checked the logs, I see the following error:
null:org.apache.solr.common.SolrException: Error trying to proxy request for url: http://139.XX.XX.34:8983/solr/MyCollection/select
The IP that the error mentions is the IP of one of the old VMs, and there is another error message like it for the other old IP. Any ideas where it could be getting the IP value from? I thought that the Zookeeper upconfig would have fixed this, and I've searched all of my Solr and ZooKeeper files to see if there was a config file I missed, but didn't find any that mention the old IPs anywhere.
You need to remove the old servers' replicas from the Solr cluster if they still appear in the Solr UI.
You can remove them from the Solr UI under the Collections menu; there is a red cross button for each replica.
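If you prefer doing it programmatically, the Collections API can delete a replica as well; a rough sketch, where the collection, shard, and replica names are placeholders you would read off the Cloud > Graph page:

# Remove a stale replica that still points at an old VM
curl "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=MyCollection&shard=shard1&replica=core_node3"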
I created a Django website and filled the database with some entries. I then uploaded the site to Heroku. Later, I added more entries, but I mistakenly added them to the remote database.
I would like to (1) sync the local and remote databases (so that all the new material I added to the remote database gets copied to the local database) and (2) from now on work in such a way that I add entries to the local database first and then those entries get added to the remote database when I git push.
Any idea how to do this? Thank you.
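One possible approach (not from the original post; it assumes Heroku Postgres, a local Postgres database, and placeholder names) is to copy the remote database down with the heroku CLI and keep working locally from there:

# Pull the remote Heroku Postgres database into a new local database
heroku pg:pull DATABASE_URL local_site_db --app my-heroku-app
# Later, push the local database back up (pg:push requires the remote database to be empty)
heroku pg:reset DATABASE_URL --app my-heroku-app
heroku pg:push local_site_db DATABASE_URL --app my-heroku-app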
For the last couple of weeks I have been using SolrCloud on 3 development servers with a single load balancer (in the future I will extend it to 5 different servers for ZooKeeper and Solr). My current SolrCloud structure is as below.
Server 1 : Java + Solr(port 8983) + Zookeeper(port 2181)
Server 2 : Java + Solr(port 8983) + Zookeeper(port 2181)
Server 3 : Java + Solr(port 8983) + Zookeeper(port 2181)
Here I am able to create the Solr configuration from any server by uploading the conf of my collection and RELOADing the collection using the Collections API; all my Solr configuration is syncing, and I am able to index and search my documents perfectly. My collection had 1 shard and 3 replicas, then I split the single shard into two, so basically it is a single collection with 3 shards and 3 replicas now.
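For reference, the kind of commands this involves (the ZooKeeper host, config directory, and collection names below are placeholders for my real ones):

# Upload the collection's config to ZooKeeper, then reload the collection
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir /path/to/conf -confname myconf
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=MyCollection"
# Split the original single shard into two
curl "http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=MyCollection&shard=shard1"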
So now I have some questions:
Q1) Is my current structure OK, or do I need to change it?
Q2) How can I back up and restore my indexed collection data?
Q3) What would happen if one of my servers closed its connection and I then tried to back up and restore my Solr data?
I have seen the Collections API endpoints to back up and restore collection data at https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-Backup, but I couldn't figure out how to set the path/to/my/shard/drive and the other options on those two API endpoints to back up and restore my indexed data. I need help badly.
I have faced a similar problem. The Solr Collections API provides backup of a complete collection from Solr v6.0 onwards:
Using Spring Solr Data or Not for Flexible Requests as Like Backup?
Go to the link above; you can get a backup that way.
You need to call the backup command on each shard.
Use the location param to set path/to/my/shard/drive.
This path should be present on all of your servers 1, 2, 3.
When running the restore API, you need to provide the same path.
Restore will recover each shard using the data present at path/to/my/shard/drive. For example:
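A rough sketch of the two calls (the collection name, backup name, and host are placeholders; the location must exist and be writable on every node):

# Back up the collection to a path shared by all nodes
curl "http://localhost:8983/solr/admin/collections?action=BACKUP&name=mybackup&collection=MyCollection&location=/path/to/my/shard/drive"
# Restore it later, here into a new collection, from the same path
curl "http://localhost:8983/solr/admin/collections?action=RESTORE&name=mybackup&collection=MyCollection_restored&location=/path/to/my/shard/drive"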
If you don't want to back up to the local filesystem, you can use HDFS as the backup filesystem.
This can be done by adding a new repository in solr.xml and then using that repository name in the backup/restore API.
The location and repository options are mutually exclusive.
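Assuming a repository named hdfs has been defined in solr.xml (the name is just an example), the call then references it by name instead of a filesystem path:

# Back up to the repository configured in solr.xml rather than a local path
curl "http://localhost:8983/solr/admin/collections?action=BACKUP&name=mybackup&collection=MyCollection&repository=hdfs"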
I am currently working in Google App Engine. I was testing some features locally, and suddenly some of the tables in my local datastore got deleted. Originally it had 20+ tables and now it is displaying only 6. I thought my bin file had got corrupted and tried to replace the current bin file with my backup bin file, but it is still showing only 6 tables instead of 20+. Any suggestions?
By default the local dev server keeps its data in /tmp. If you restart your machine, that directory is usually cleared.
You can specify another directory by starting the dev server with the --datastore_path command-line argument, as described in the documentation.
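For example, a sketch assuming the Python dev server and a placeholder path:

# Keep the local datastore file somewhere that survives reboots
dev_appserver.py --datastore_path=/home/me/myapp/local_datastore.bin app.yaml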
I want to be able to run a Solr instance on my local computer while having the index directory on a remote server. Is this possible?
I've been looking for a solution for days. Please help.
Update: We have a business legal requirement where we are not allowed to store client data on our servers ... we can only read, insert, delete and update it on client request via our website, and the data has to be stored on client servers. So each client will have their own index, and we cannot run Solr or any other web application on the client's servers. Some of the clients have a Dropbox Business account, so we thought that maybe just having the Solr index files uploaded to Dropbox might work.
Enable remote streaming in solrconfig.xml and configure the remote file location there.
It's working.
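For what it's worth, once remote streaming is enabled (via the enableRemoteStreaming attribute in solrconfig.xml), a request can pull content from a remote URL with the stream.url parameter; the collection name and document URL below are placeholders:

# Index a document fetched from a remote location via remote streaming
curl "http://localhost:8983/solr/MyCollection/update/extract?stream.url=http://remote.example.com/docs/report.pdf&literal.id=doc1&commit=true"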