I have a Solr instance up and running and I can visit the Solr admin page without any problem. I have set up a Solr multicore with one core for CKAN and another core for a different application, and I can see the two collections in the admin page. I don't understand why CKAN is not able to connect to Solr. I have even included the Solr site URL in production.ini.
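For reference, a CKAN source install reads the Solr endpoint from the `solr_url` setting in production.ini, and it must point at the CKAN core itself, not the bare Solr root. A minimal sketch, assuming the core was named `ckan` and Solr runs on the default port:

```ini
# production.ini — point CKAN at the dedicated core, not just http://127.0.0.1:8983/solr
solr_url = http://127.0.0.1:8983/solr/ckan
```

A quick sanity check is to open that exact URL path in a browser (e.g. the core's admin/ping endpoint) and confirm Solr answers there.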
ckan.lib.search Problems were found while connecting to the SOLR server
Edit #1: I have installed CKAN from source; I already had Solr running, so all I did was add a new core & collection for CKAN in the existing Solr instance.
I'm struggling with Apache Solr 8.8 core configuration.
I've got the core configuration files ready (generated by the Drupal 9 Search API Solr module).
I tried putting these files in solr/cores/drupal_core and in solr/current/server/solr/cores/drupal_core.
When I go to the UI and try to create a new core, I get an access denied error...
I'm not sure I understand what's going on. I've tried changing permissions on the core folders, but nothing works... Does anyone have an idea of what is happening?
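In case it helps: a standalone Solr instance discovers cores under its Solr home (server/solr by default), where each core is a directory containing a conf/ folder and a core.properties marker file, and the whole tree must be writable by the user Solr runs as. A minimal sketch of that layout, assuming illustrative paths (adjust SOLR_HOME to your install):

```shell
# Assumption: SOLR_HOME is your instance's Solr home directory
SOLR_HOME=./server/solr

# each core is a directory with conf/ and a core.properties marker
mkdir -p "$SOLR_HOME/drupal_core/conf"
touch "$SOLR_HOME/drupal_core/core.properties"

# copy the Drupal-generated config files into conf/, e.g.:
# cp /path/to/drupal-config/* "$SOLR_HOME/drupal_core/conf/"

# "access denied" from the UI usually means the solr user cannot write here:
# sudo chown -R solr:solr "$SOLR_HOME"

ls "$SOLR_HOME/drupal_core"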
I am implementing SolrCloud for the first time. I've worked with standalone Solr and have that down pretty well, but I'm not finding a lot on what you can and can't do with SolrCloud. So my question is about Managed Resources. I know you can CRUD stop words and synonyms using the new RESTful API in Solr. However, with the cloud, do I need to apply my changes to each individual Solr server, or do I send them to a single URL that propagates them to every server? I'm new to the cloud setup and ZooKeeper, and I have not found anything in the Solr wiki about working with managed resources in SolrCloud. Any advice would be helpful.
In SolrCloud, configuration and other files like stopwords are stored and maintained by ZooKeeper, which means you do not need to send updates to each server individually.
Once you have SolrCloud, before putting in any data, you will create a collection. Each collection has its own set of resources in its config folder.
So, for example, if you have a collection called techproducts on two servers, localhost1 and localhost2, the commands below, run against either server, will operate on the same resource.
curl "http://localhost1:8983/solr/techproducts/schema/analysis/synonyms/english"
curl "http://localhost2:8983/solr/techproducts/schema/analysis/synonyms/english"
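To actually modify the managed set, a PUT to that same endpoint on any node should be enough; because the resource lives in ZooKeeper, every replica sees the change once the collection is reloaded. A hedged sketch, reusing the collection and resource names above:

```shell
# add a synonym mapping via any node in the cluster
curl -X PUT -H 'Content-type: application/json' \
  --data-binary '{"mad":["angry","upset"]}' \
  "http://localhost1:8983/solr/techproducts/schema/analysis/synonyms/english"

# reload the collection so the new mapping takes effect everywhere
curl "http://localhost1:8983/solr/admin/collections?action=RELOAD&name=techproducts"
```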
First of all, thanks to Stack Overflow for supporting everyone.
I am new to Drupal and the Solr server.
I have successfully installed the Solr server on my system, and I am able to search the data using the "Apache Solr Search" module in Drupal 7.
But I don't actually know what background process is running, and in order to work with it I need some grounding. Drupal connects to the Solr server using the URL I provided in the admin UI.
To my knowledge, the backend flow of the Apache Solr Search module is the following:
1) Drupal sends the search string as a request to the Solr server.
2) The Solr server searches for the string and sends the results back to Drupal as JSON.
3) Drupal displays the results.
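To make steps 1 and 2 concrete: the module issues a plain HTTP select request and Solr answers with JSON. A rough sketch of what that looks like on the wire (the core name `drupal` is illustrative):

```shell
# the search request Drupal sends, more or less, asking for JSON results back
curl "http://localhost:8983/solr/drupal/select?q=some+search+string&wt=json"
```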
But how does the Solr server connect to the Drupal DB in order to search for the string or content?
Please help with this. I really need to know how the request is handled in the backend.
Thank you.
I'm not a Drupal specialist, but from the Solr perspective you are searching over documents previously indexed in Solr. That is, all documents must be indexed in Solr prior to the search.
Therefore, you have two options here:
You call the Solr API from your backend and push documents to the Solr index. There are Drupal-specific solutions you may research, but here is the wiki article from the Solr perspective describing how to index documents using only the JSON API: http://wiki.apache.org/solr/UpdateJSON
You connect to your database directly from Solr and pull documents into the Solr index. Here is the related wiki page: http://wiki.apache.org/solr/DataImportHandler
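For the first option, pushing a document into the index can be as simple as a JSON POST to the update endpoint. A sketch, assuming a core named `drupal` and illustrative field names:

```shell
# index one document and commit immediately
curl -X POST -H 'Content-type: application/json' \
  --data-binary '[{"id":"node-1","title":"Hello","body":"Example body text"}]' \
  "http://localhost:8983/solr/drupal/update?commit=true"
```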
OK, so I'm trying to set up Nutch to crawl a site and index the pages into Solr. I'm currently using Nutch 1.9 with Solr 4.10.2.
I've followed these instructions: http://wiki.apache.org/nutch/NutchTutorial#A4._Setup_Solr_for_search
The crawling appears to go just fine, but when I check the collection in Solr (using the web UI) there are no documents indexed... any idea where I could check for problems?
Found my problem; I'll leave it as an answer in case anyone else has the same symptoms:
My problem was the proxy configuration. My Linux box has a system-wide proxy configured, but I also had to configure Nutch to use the same proxy. Once I changed that, it started to work.
The configuration is under conf/nutch-default.xml.
Edit with more info
To be more specific, here is the Proxy configuration I had to change:
<property>
  <name>http.proxy.host</name>
  <value>xxx.xxx.xxx</value>
  <description>The proxy hostname. If empty, no proxy is used.</description>
</property>
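The companion property for the proxy port lives next to it in the same file; a sketch with an illustrative port value:

```xml
<property>
  <name>http.proxy.port</name>
  <value>8080</value>
  <description>The proxy port.</description>
</property>
```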
I am using Apache Solr and I created another core, which worked fine. But once I shut down my server and restart it, the new core gets deleted, even though its folder is still there in the Solr directory. Can anyone tell me why it gets deleted from my Apache Solr? Thanks in advance.
Check the persistent attribute in solr.xml (<solr persistent="true">), which persists changes made through the Admin UI so that they are still available after restarts.
If persistence is enabled (persist=true), the configuration for this new core will be saved in 'solr.xml'.
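For context, a legacy-style solr.xml with persistence enabled looks roughly like this (the core names are illustrative):

```xml
<!-- persistent="true" makes cores created via the Admin UI survive restarts -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="collection1" instanceDir="collection1" />
    <core name="mycore" instanceDir="mycore" />
  </cores>
</solr>
```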