I have Solr running on a Jetty server, and I'd like to be able to update a configuration file and have my application pick up the changes without restarting the entire server. Specifically, I'm looking for something similar to touch web.xml in Tomcat. Is that possible, and if so, how do I do it?
EDIT:
Specifically, I want to update application-specific information in an external file, and I want to get the application to load this new data all without stopping and starting the server.
There are several ways to achieve this (assuming you mean general config reloading). You can have a daemon thread poll the file's last-modified timestamp and trigger a reload when it changes. Or you can check the timestamp on each configuration value lookup, if lookups don't happen too often. But my preferred way would be to expose a "reload configuration" operation, either through JMX or through a URL that is accessible only from the "inside".
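A minimal sketch of the daemon-thread polling approach in Java (ConfigWatcher, the file path, and the reload callback are illustrative names, not any particular library's API):

    import java.io.File;
    import java.util.Timer;
    import java.util.TimerTask;

    // Daemon timer thread that polls the config file's last-modified
    // timestamp and runs the supplied reload action when it changes.
    public class ConfigWatcher {

        public static void watch(File configFile, Runnable reload, long intervalMs) {
            Timer timer = new Timer("config-watcher", true); // daemon: won't block shutdown
            timer.scheduleAtFixedRate(new TimerTask() {
                private long lastModified = configFile.lastModified();

                @Override
                public void run() {
                    long current = configFile.lastModified();
                    if (current != lastModified) {
                        lastModified = current;
                        reload.run(); // e.g. re-parse the file and swap the config atomically
                    }
                }
            }, intervalMs, intervalMs);
        }
    }

You'd call something like ConfigWatcher.watch(new File("/etc/myapp/app.properties"), this::reloadConfig, 5000) at startup; java.nio.file.WatchService is an event-driven alternative to polling.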
If you are running Solr 4+ and are talking about schema.xml and solrconfig.xml, then you want 'Reload Core', which is in the Web Admin UI under core/collection management. You can also trigger it from a URL.
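The URL form goes through the CoreAdmin API; assuming a core named mycore on the default port, it looks like:

    http://localhost:8983/solr/admin/cores?action=RELOAD&core=mycore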
From the Apache Solr admin UI, go to Core Admin and reload the respective core. If you are using SolrCloud, it is even easier: just reload the configuration via ZooKeeper. The changes become visible only after the copy completes.
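Concretely, for SolrCloud that usually means re-uploading the config set to ZooKeeper and then reloading the collection; in this sketch the ZooKeeper host, config directory, and the myconf/mycollection names are placeholders:

    # Upload the edited config set to ZooKeeper (zkcli.sh ships with Solr)
    server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
        -cmd upconfig -confdir /path/to/conf -confname myconf

    # Reload the collection so every replica picks up the new config
    http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection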
Related
I have been struggling to find a solution for the following scenario:
I have 2 instances of SolrCloud on Compute Engine.
I have 2 instances of an application REST API that calls the above Solr cluster for data.
From my application I want to take a backup of Solr, automatically copy the zipped backup file to Google Cloud Storage, and restore it automatically via a URL endpoint.
For that I am trying to make an API endpoint in my application that will call the Solr API below to take a backup:
/admin/collections?action=BACKUP
And another endpoint that calls the URL below to restore:
/admin/collections?action=RESTORE
However, after taking a backup my application doesn't have access to the backup files, as they are saved on the Solr instances. So I am not able to copy them to a Google Cloud Storage bucket.
Please guide me toward a simpler way to achieve this, i.e. automatically backing up and restoring Solr from another GCP instance.
Have you considered something like gcs-fuse? It'll allow you to mount a GCS bucket directly on the file system.
You can then point the BACKUP command directly at the gcs-fuse mount point on your Solr Compute Engine VMs, and the whole thing is handled by how the VM is configured (instead of having to upload the backup afterwards with a separate tool once a local copy has been made).
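A rough sketch of that setup, assuming a bucket named my-solr-backups and a mount point of /mnt/solr-backups (both placeholders). Note that the location passed to the Collections API has to be a path every node can see, which is exactly what the shared bucket mount gives you:

    # On each Solr VM: mount the bucket (gcsfuse must be installed)
    gcsfuse my-solr-backups /mnt/solr-backups

    # Point the backup at the mounted path
    http://solr-node:8983/solr/admin/collections?action=BACKUP&name=nightly&collection=mycollection&location=/mnt/solr-backups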
I found GCSFuse a bit unreliable, and decided to write a wrapper script which first detects the master for the given collection and then executes the backup directly on that node.
I am using Apache Solr as the backend to provide search on my website. As Solr uses Jetty as its server, I have changed jetty.xml to enable the request.log file. Now I have to enable it in the running setup; I cannot restart Solr. How can I make these changes visible in the running setup?
Not sure, I have not tried it, but given that Jetty internally uses log4j, you can use JMX.
Log4j registers its loggers as JMX MBeans. Using the JDK's jconsole.exe you can reconfigure each individual logger. These changes are not persistent and will be reset to whatever is in the configuration file once you restart your application (server).
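One caveat: jconsole can attach directly to a JVM running on the same machine under the same user, but for remote access the server JVM must already have been started with the standard JMX flags, for example (the port number is arbitrary, and you'd want authentication in production):

    -Dcom.sun.management.jmxremote
    -Dcom.sun.management.jmxremote.port=18983
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false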
In Sitecore you basically have three databases. The Core, Master and Web database.
Simply put, the Core database holds all Sitecore settings. The Master database is the authoring database, so it contains all versions of any content.
Then in Sitecore you can "publish" the contents and it will publish the latest version of each content to the Web database.
So suppose I have a website with a news page. And a user is able to edit a news item from the web site (so not through the CMS). How would the database then get updated when it's set up like this?
It would probably update the Web database, but then when I go into the CMS I don't see the latest changes, since the CMS reads from the Master database, right?
So does that mean that it should write twice? Once to the Web database and once to the Master database?
Can anyone tell me how this works in Sitecore or the like?
The reason I'd like to know this is because I'm thinking of creating a similar database setup, and I'm just not sure how to solve this issue.
When you have items that need to be updated by the website visitor, you need to use the SitecoreService SOAP web service or create your own custom web service that runs on the Master instance and triggers a publish after updating.
Well, Sitecore has a publishing step. When the user publishes in Sitecore, it updates the Web database at that point. If you want to build a similar system, I would simply store all versions of an item in the Master database and only when the user chooses to publish, copy the latest version to the Web database.
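A sketch of that data flow, in Java for illustration only (VersionedStore, LiveStore, and every name here are hypothetical, not Sitecore APIs): website edits are authored through the Master store as a new version and then published, so nothing ever writes to the Web store directly.

    // All names here are hypothetical; this only illustrates the data flow.
    class Item { /* content fields elided */ }

    interface VersionedStore {                      // the "Master" role
        Item latestVersion(String itemId);
        void saveNewVersion(String itemId, Item item);
    }

    interface LiveStore {                           // the "Web" role
        void replace(String itemId, Item item);
    }

    class Publisher {
        private final VersionedStore master;
        private final LiveStore web;

        Publisher(VersionedStore master, LiveStore web) {
            this.master = master;
            this.web = web;
        }

        // An edit coming from the public site is authored through Master...
        void editFromWebsite(String itemId, Item edited) {
            master.saveNewVersion(itemId, edited);
            publish(itemId); // ...then pushed live, so there is only one write path
        }

        // Publishing copies the latest Master version over the live copy.
        void publish(String itemId) {
            web.replace(itemId, master.latestVersion(itemId));
        }
    }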
If your site
- generates a lot of comments
- generates the comments continuously
- uses multiple content delivery servers
- requires CMS users to manage them
I would not store the comments as content items.
The reason is HTML cache and publishing behavior.
On a high-volume site you'd almost certainly use HTML caching to achieve the best possible performance. If a publish is required to show comments, you'd need frequent publish actions, and the HTML caches would therefore be cleared often.
You don't want that :-)
Modeling after the DMS implementation is the safest option (not the cheapest, and Datatables isn't something I recommend these days): store the comments in a separate database, possibly using queuing to prevent an overload if things get busy.
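A minimal sketch of the queuing idea in Java (CommentStore and all the names are hypothetical): a bounded in-memory queue absorbs submission bursts, and a single writer thread drains it into the separate comment database at a pace the database can sustain.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Buffers incoming comments so a burst of submissions cannot
    // overload the separate comment database.
    public class CommentQueue {
        private final BlockingQueue<String> pending = new LinkedBlockingQueue<>(10_000);

        // Called from the web tier; returns false instead of blocking when full.
        public boolean submit(String comment) {
            return pending.offer(comment);
        }

        // A single writer thread drains the queue into the comment store.
        public void startWriter(CommentStore store) {
            Thread writer = new Thread(() -> {
                try {
                    while (true) {
                        store.save(pending.take()); // blocks until a comment arrives
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "comment-writer");
            writer.setDaemon(true);
            writer.start();
        }

        public interface CommentStore { void save(String comment); }
    }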
We are developing a Grails web application, where different users (customers) need to be pointed at different databases containing only their organization's data. Unfortunately, the separated databases are a requirement, and we are being asked to be able to have only 1 web application for everybody.
However, Grails expects only a single datasource pool connecting to one database.
We want to be able switch database connections, per session, based on the user that is logged in, where the different connections are read in from properties files during the BootStrap init().
So far, we have been unable to find a solution that does not seem to have race conditions; there is no plugin we can find, and it doesn't seem to be a popular issue.
Our most promising attempt was creating a custom dynamic data source, set up in BootStrap to define a map of organization->dataSource, and using a closure defined in BootStrap to find the appropriate dataSource before any GORM operation, but this seems to cause a race condition when there is latency.
Does anyone have an idea how this switching can legitimately be performed?
Thanks
Considering Grails is built upon Spring, your best bet is to develop your own resolvable DataSource:
Dynamic datasource routing
Example of datasource routing
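The heart of that approach is Spring's AbstractRoutingDataSource, which chooses a target DataSource on every connection request. A minimal sketch, assuming a servlet filter stores the logged-in user's organization in a ThreadLocal at the start of each request (the ThreadLocal is what avoids the shared-state race condition):

    import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

    // Holds the current organization key for the executing request thread.
    // A ThreadLocal means concurrent requests cannot clobber each other's key.
    class TenantContext {
        private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
        static void set(String tenant) { CURRENT.set(tenant); }
        static String get()            { return CURRENT.get(); }
        static void clear()            { CURRENT.remove(); }
    }

    // Spring asks this object which DataSource to use each time a
    // connection is requested; we answer with the current organization.
    public class TenantRoutingDataSource extends AbstractRoutingDataSource {
        @Override
        protected Object determineCurrentLookupKey() {
            return TenantContext.get(); // null falls back to the default DataSource
        }
    }

You would register the organization->DataSource map built in BootStrap via setTargetDataSources(), call TenantContext.set(...) in a filter once the user is known, and clear it in a finally block when the request ends.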
It's not clear from your question whether you're deploying your application once and trying to configure the DataSource per user, or whether you just want to configure it per deployment.
If it's just per deployment, Grails allows you to externalize the configuration. You can set this to use a file on the classpath or in a static location.
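For the per-deployment case, the usual pattern is to list extra config locations in Config.groovy (the file path below is just an example):

    // grails-app/conf/Config.groovy
    grails.config.locations = [
        "classpath:myapp-config.groovy",       // packaged defaults
        "file:/etc/myapp/myapp-config.groovy"  // per-deployment overrides
    ]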
I am making a search query against a local Apache Solr server in the browser and viewing the results.
I want to make the same query on the production server.
Since the Tomcat port is blocked on production, I cannot test the query results in the browser.
Is there any way to make the query and see the results?
Solr is a Java web application: if you can't access the port it's listening on, you can't access Solr itself. There's no other way to retrieve data from a remote location. Usually in production Solr is put behind an Apache proxy, which protects Solr as a whole and makes only the needed contexts accessible, in your case /solr/select for example, to run queries.
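That kind of reverse proxy is typically just a couple of mod_proxy directives in the Apache config; in this sketch the host and port are illustrative, and only the select handler is exposed while the admin and update handlers stay blocked:

    # Expose only the query handler, nothing else
    ProxyPass        /solr/select http://localhost:8983/solr/select
    ProxyPassReverse /solr/select http://localhost:8983/solr/select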