How to change Jetty log configuration at run time in Solr

I am using Apache Solr as the backend to provide search on my website. Since Solr uses Jetty as its server, I have changed jetty.xml to enable the request.log file. Now I need to enable it in the running setup; I cannot restart Solr. How can I make these changes visible in the running setup?
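For context, a typical request-log fragment in jetty.xml for Jetty versions of that era looks like the sketch below; the class name varies by Jetty version, and the file path and retention value are placeholders:

```xml
<Ref id="Server">
  <Set name="requestLog">
    <New id="RequestLogImpl" class="org.eclipse.jetty.server.NCSARequestLog">
      <Set name="filename">logs/request.yyyy_mm_dd.log</Set>
      <Set name="append">true</Set>
      <Set name="extended">false</Set>
      <Set name="retainDays">30</Set>
    </New>
  </Set>
</Ref>
```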

I am not sure, as I have not tried it, but given that Jetty internally uses log4j, you can use JMX.
Log4j registers its loggers as JMX MBeans. Using the JDK's jconsole
you can reconfigure each individual logger. These changes are not persistent
and will be reset to the settings in the configuration file
after you restart your application (server).

Related

How to configure graphite metrics reporter for kinesis data analytics application

I am running a Flink application on the AWS Kinesis Data Analytics service. Flink has built-in support for metrics, and I have a simple counter set up that I can see is working; it is available in the Flink dashboard.
Now I want to configure Graphite as the collector for my metrics. According to Flink this is possible: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/metric_reporters/#graphite
My problem is that I cannot get the Flink application to read my configuration.
I have tried:
Creating the file conf/flink-conf.yaml alongside the Java code, but it seems to be ignored.
Passing a configuration override to StreamExecutionEnvironment.getExecutionEnvironment(configuration), but this also seems to be ignored.
How do I get metrics reported to Graphite?
Got a response from Amazon Support:
Since this is a managed service, it is not possible to change the configuration at this time.
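For comparison, on a self-managed Flink cluster (where you do control flink-conf.yaml) the Graphite reporter from the linked docs is configured roughly as below. This is a sketch: the reporter name grph, the host, and the port are placeholders, and older Flink versions use a metrics.reporter.grph.class key instead of the factory setting:

```yaml
metrics.reporter.grph.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
metrics.reporter.grph.host: graphite.example.com
metrics.reporter.grph.port: 2003
metrics.reporter.grph.protocol: TCP
metrics.reporter.grph.interval: 60 SECONDS
```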

Unable to update logback configuration in Flink Native Kubernetes

I have created a Flink Native Kubernetes (1.14.2) cluster, which was successful. I am trying to update the logback configuration using the configmap exposed by Flink Native Kubernetes. Flink Native Kubernetes creates this configmap when the cluster starts and deletes it when the cluster is stopped; this behavior is as per the official documentation.
I updated the logback configmap, which also succeeded, and this even updates the actual logback files (conf folder) in the job manager and task manager. But Flink is not loading (hot reloading) this logback configuration.
I also want to make sure that the logback configmap configuration persists across cluster restarts, but Flink Native Kubernetes recreates the configmap each time the cluster is started.
What am I missing here? How do I make the updated logback configuration work?
Maybe you can refer to 'Configuring logback' in https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/advanced/logging/
Solution
To start a Flink Native Kubernetes cluster, we call kubernetes-session.sh. We just need to update the local conf folder before calling it. The following files should be updated based on your requirements:
flink-conf.yaml
logback-console.xml (when using logback)
log4j-console.properties (when using log4j)
How it works
Basically, the files from the local conf folder are used to create the configmap in Kubernetes, which contains the above-mentioned three files irrespective of whether you use logback or log4j. The contents of the configmap are then copied to the containers (jobmanager and taskmanager) when the containers are started.
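As an illustration, a customized local conf/logback-console.xml prepared before calling kubernetes-session.sh could look like the sketch below; the logger name com.example.myjob and the levels are placeholders:

```xml
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="console"/>
  </root>
  <!-- placeholder: raise verbosity for your own package -->
  <logger name="com.example.myjob" level="DEBUG"/>
</configuration>
```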
Reference
Thanks to Sharon Xie who gave the solution in the flink mailing list.

Solr reload is not picking up the latest changes from Zookeeper

Configuration Details:
Solr: 6.0.0
Zookeeper: 3.4.6
OS: AWS Linux
We are facing an issue where, when we RELOAD an existing collection in Solr to pick up the latest configuration from Zookeeper (which we upconfig to Zookeeper), the changes are not reflected in the collection. We verified this using the Solr admin UI. The changes we made are available in Zookeeper; we verified this by connecting to Zookeeper using the zkcli.sh script.
Note: We can see the latest config picked up correctly if we create a new collection based on the config we uploaded to Zookeeper. The issue occurs only for the Solr RELOAD call.
Please help us narrow down the issue.
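For reference, the two operations described above (upconfig, then RELOAD) are sketched below as a dry run that prints the commands rather than executing them. ZK_HOST, CONF_NAME and COLLECTION are placeholders for your environment, and the zkcli.sh path matches the Solr 6 default layout:

```shell
# Dry-run sketch: prints the commands instead of running them against a live cluster.
ZK_HOST="zk1:2181,zk2:2181,zk3:2181"
CONF_NAME="myconf"
COLLECTION="mycollection"

# 1. Push the updated config directory to Zookeeper
echo "server/scripts/cloud-scripts/zkcli.sh -zkhost $ZK_HOST" \
     "-cmd upconfig -confdir ./conf -confname $CONF_NAME"

# 2. Ask Solr to reload the collection via the Collections API
echo "curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=$COLLECTION'"
```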

I have created a new core in Solr, and once I restart my Tomcat server the core gets deleted. Why does the core in Solr get deleted?

I am using Apache Solr and I have created another core, and it works fine. But once I shut down my server and restart it, the new core gets deleted, although its folder still seems to be there in the Solr directory. Can anyone tell me why it gets deleted from my Apache Solr? Thanks in advance.
Check for the persistent attribute in solr.xml (<solr persistent="true">), which will persist changes made through the Admin UI so that they remain available after restarts as well.
If persistence is enabled (persist=true), the configuration for this
new core will be saved in 'solr.xml'.
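For the legacy (pre-4.4) solr.xml format that answer refers to, the attribute sits on the root element; a sketch, with core names and instance directories as placeholders:

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0"/>
    <core name="core1" instanceDir="core1"/>
  </cores>
</solr>
```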

Can I update an application configuration file without restarting the Jetty server?

I have Solr running on a Jetty server, and I'd like to be able to update a configuration file and have my application pick up the changes without restarting the entire server. Specifically, I'm looking for something similar to touch web.xml in Tomcat. Is that possible, and if so, how do I do it?
EDIT:
Specifically, I want to update application-specific information in an external file, and I want to get the application to load this new data all without stopping and starting the server.
There are several ways to achieve this (assuming you're thinking of general config reloading). You can have a daemon thread poll the file's last-changed timestamp and trigger a reload. Or you can check the timestamp on each configuration value lookup, if lookups don't happen too often. But my preferred way would be to expose a "reload configuration" operation, either through JMX or through a URL that is accessible only from the "inside".
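The daemon-thread polling approach can be sketched like this in Python (a minimal sketch under stated assumptions: ConfigWatcher and the callback are hypothetical names, and a real implementation would reparse the file inside the callback):

```python
import os
import threading

class ConfigWatcher:
    """Polls a config file's last-modified time on a daemon thread and
    invokes a reload callback whenever the file changes."""

    def __init__(self, path, on_reload, interval=1.0):
        self.path = path
        self.on_reload = on_reload  # called with the path on each detected change
        self.interval = interval    # polling period in seconds
        self._mtime = os.path.getmtime(path)
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._poll, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _poll(self):
        # wait() returns False on timeout and True once stop() is called
        while not self._stop.wait(self.interval):
            mtime = os.path.getmtime(self.path)
            if mtime != self._mtime:
                self._mtime = mtime
                self.on_reload(self.path)
```

A JMX- or URL-triggered reload avoids the polling delay, but a watcher like this needs no extra endpoint.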
If you are running Solr 4+ and are talking about schema.xml and solrconfig.xml, then you want 'Reload Core', which is in the Web Admin UI under core/collection management. You can also trigger it from a URL.
From the Apache Solr admin, go to Core Admin and reload the respective core. If you are using SolrCloud, it is even easier: just reload the configuration via Zookeeper. The changes will be visible only after the copy completes.
