Unable to update logback configuration in Flink Native Kubernetes - apache-flink

I have successfully created a Flink Native Kubernetes (1.14.2) cluster. I am trying to update the logback configuration using the ConfigMap exposed by Flink Native Kubernetes. Flink Native Kubernetes creates this ConfigMap when the cluster starts and deletes it when the cluster stops; this behavior matches the official documentation.
Updating the logback ConfigMap also succeeds, and the change even propagates to the actual logback files (in the conf folder) on the JobManager and TaskManager. But Flink does not load (hot reload) this updated logback configuration.
I also want the logback ConfigMap configuration to persist across cluster restarts, but Flink Native Kubernetes recreates the ConfigMap each time the cluster starts.
What am I missing here? How do I make the updated logback configuration take effect?

Maybe you can refer to 'Configuring logback' in https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/advanced/logging/

Solution
To start a Flink Native Kubernetes cluster, we call kubernetes-session.sh. We just need to update the local conf folder before calling kubernetes-session.sh (see the example after the list). The following files can be updated, depending on our requirements:
flink-conf.yaml
logback-console.xml (when using logback)
log4j-console.properties (when using log4j)
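For illustration, a minimal sketch of that workflow; the cluster-id and namespace values here are placeholders, not from the original answer:

    # Edit the logging configuration in the local Flink distribution's conf folder
    vi $FLINK_HOME/conf/logback-console.xml

    # Start the session cluster; the current conf folder contents are used
    # to build the ConfigMap that the pods will mount
    $FLINK_HOME/bin/kubernetes-session.sh \
      -Dkubernetes.cluster-id=my-flink-session \
      -Dkubernetes.namespace=flink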
How it works
Basically, the files from the local conf folder are used to create the ConfigMap in Kubernetes, which contains the three files mentioned above irrespective of whether you use logback or log4j. The contents of the ConfigMap are then copied into the containers (JobManager and TaskManager) when the containers start.
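You can verify this by inspecting the generated ConfigMap. A sketch, assuming the placeholder cluster-id from the example above; Flink names the ConfigMap after the cluster-id (flink-config-<cluster-id>):

    # Show the files (keys) stored in the ConfigMap Flink created
    kubectl -n flink get configmap flink-config-my-flink-session -o yaml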
Reference
Thanks to Sharon Xie, who provided the solution on the Flink mailing list.

Related

Building Flink application mode without using Flink base image

I found Flink's application deployment is opinionated and inflexible. For example:
Some dependencies need to be marked as provided since they are bundled within the Docker image or Flink's base image.
If I want to change log4j layout type, I need to change Flink client's local file.
Environment variables and k8s secrets need to be passed in via the deployment command's options, which can get very long.
How do I build a Flink application that runs in application mode without using Flink as the base image? Ideally, all dependencies would be managed in one place, the log configuration would be baked into the app, env vars and secrets would be specified in the k8s deployment YAML, and the base image would provide only a JDK/JRE.

How to configure graphite metrics reporter for kinesis data analytics application

I am running a Flink application as part of the AWS Kinesis Data Analytics service. Flink has built-in support for metrics, and I have a simple counter set up that I can see is working; it is available in the Flink dashboard.
Now I want to configure Graphite to collect my metrics. According to the Flink docs, this is possible: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/metric_reporters/#graphite
My problem is that I cannot get the Flink application to read my configuration.
I have tried:
Creating the file conf/flink-conf.yaml alongside the Java code, but it seems to be ignored.
Passing a configuration override to StreamExecutionEnvironment.getExecutionEnvironment(configuration), but that also seems to be ignored.
How do I get metrics reported to graphite?
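For reference, per the linked docs a Graphite reporter is normally configured in flink-conf.yaml roughly like this (the host and port here are placeholders):

    metrics.reporter.grph.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
    metrics.reporter.grph.host: graphite.example.com
    metrics.reporter.grph.port: 2003
    metrics.reporter.grph.protocol: TCP
    metrics.reporter.grph.interval: 60 SECONDS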
I got a response from Amazon Support:
Since this is a managed service, it is not possible to change the configuration at this time.

deployment of queue.xml to new non-default version does not create queues

I am trying to use task queues in GAE Java 8, but the deployment via the queue.xml file does not seem to work correctly. I also cannot see the task queues in the Cloud Tasks console (which is where the App Engine console redirects me).
I get an error java.lang.IllegalStateException: The specified queue is unknown : xxxxx when running the app.
The app runs fine locally. I can see the task queues appearing locally in the admin page.
Does this mean that I cannot deploy task queues via queue.xml anymore?
You should be aware that the queue configuration is not a per-version config (or even a per-service one!); it is a global, per-application config. Or per-project, if you want, considering that there can only be one GAE application per GCP project.
This single queue configuration is shared by all versions of all services of your application, so:
if/when services/versions need different queue configs, all of them need to be merged into a single file for deployment (see the sketch after this list)
pay attention at deployment not to overwrite or otherwise negatively impact existing services/versions
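A minimal sketch of such a merged queue.xml (the queue names are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <queue-entries>
      <!-- queues from all services/versions, merged into one file -->
      <queue>
        <name>service-a-queue</name>
        <rate>5/s</rate>
      </queue>
      <queue>
        <name>service-b-queue</name>
        <rate>1/s</rate>
        <bucket-size>10</bucket-size>
      </queue>
    </queue-entries>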
While in some cases the queue.xml file might be deployed automatically when you deploy your application code, that is not always the case. The officially recommended deployment method is the dedicated queue-configuration deployment command, which can be run independently of deploying application/service code. From Deploying the queue configuration file:
To deploy the queue configuration file without otherwise altering the currently serving version, use the command:
appcfg.sh update_queues <application directory>
replacing <application directory> with the path to your application's main directory.
Pay extra attention if you have:
other non-Java standard environment services in your app: they use the queue.yaml configuration file, and managing the same content in 2 different files/formats can be tricky
other services managing queues via Cloud Tasks. See Using Queue Management versus queue.yaml.

How to change jetty log configuration at run time in solr

I am using Apache Solr on the backend to provide search on my website. Solr uses Jetty as its server, and I have changed jetty.xml to enable the request.log file. Now I have to enable it in the running setup; I cannot restart Solr. How can I make these changes visible in the running setup?
I'm not sure, as I have not tried it, but given that Jetty uses log4j internally, you can use JMX.
Log4j registers its loggers as JMX MBeans. Using the JDK's jconsole.exe you can reconfigure each individual logger. These changes are not persistent and would be reset to the config as set in the configuration file after you restart your application (server).
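If you can run code inside the same JVM instead of attaching jconsole, the equivalent change can also be made programmatically with the log4j 1.x API; a minimal sketch (the logger name is a placeholder), with the same caveat that it does not survive a restart:

    import org.apache.log4j.Level;
    import org.apache.log4j.LogManager;

    public class LogLevelChanger {
        public static void main(String[] args) {
            // Raise the level of one logger at runtime; resets on restart
            LogManager.getLogger("org.mortbay.jetty").setLevel(Level.DEBUG);
        }
    }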

Can I update an application configuration file without restarting the Jetty server?

I have Solr running on a Jetty server, and I'd like to be able to update a configuration file and have my application pick up the changes without restarting the entire server. Specifically, I'm looking for something similar to touch web.xml in Tomcat. Is that possible, and if so, how do I do it?
EDIT:
Specifically, I want to update application-specific information in an external file, and I want to get the application to load this new data all without stopping and starting the server.
There are several ways to achieve this (assuming you're thinking of general config reloading). You can have a daemon thread polling the file's last-changed timestamp and trigger a reload when it changes (a sketch follows below). Or you can check the timestamp on each configuration value lookup, if lookups don't happen too often. But my preferred way would be to expose a "reload configuration" operation, either through JMX or through a URL that is accessible only from the "inside".
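A minimal sketch of the polling approach; the class name, file, and interval are arbitrary illustrations, not from the answer:

    import java.io.File;

    public class ConfigWatcher {
        // Starts a daemon thread that polls the file's last-modified
        // timestamp and runs the reload callback when it changes.
        public static void watch(File configFile, Runnable reload, long intervalMs) {
            Thread watcher = new Thread(() -> {
                long lastModified = configFile.lastModified();
                while (true) {
                    try {
                        Thread.sleep(intervalMs);
                    } catch (InterruptedException e) {
                        return; // stop watching when interrupted
                    }
                    long current = configFile.lastModified();
                    if (current != lastModified) {
                        lastModified = current;
                        reload.run(); // e.g. re-parse the file into your config object
                    }
                }
            });
            watcher.setDaemon(true); // don't keep the JVM alive just for the watcher
            watcher.start();
        }
    }

Usage would be something like ConfigWatcher.watch(new File("app.properties"), myConfig::reload, 5000).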
If you are running Solr 4+ and are talking about schema.xml and solrconfig.xml, then you want 'Reload Core', which is in the Web Admin UI under core/collection management. You can also trigger it from a URL.
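The URL form goes through the CoreAdmin API; a sketch, assuming a core named mycore on the default port (both are placeholders):

    curl 'http://localhost:8983/solr/admin/cores?action=RELOAD&core=mycore'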
From the Apache Solr admin, go to Core Admin and reload the respective core. If you are using SolrCloud, it is even easier: just reload the configuration via ZooKeeper. These changes will be visible only after the copy completes.
