Karaf cluster synchronization - apache-camel

I have some misunderstandings about Karaf Cellar cluster replication.
We have 4 nodes, and all of them are members of the cluster.
First problem:
If we install a feature on the cluster (cluster:feature-install) and then run cluster:bundle-list default, we see that all bundles belonging to this feature are shown as local in the Located column, while the feature itself is shown as cluster/local.
How can I get the bundles to show the cluster/local type as well?
Second problem:
When we install a feature on the cluster (cluster:feature-install), all of its bundles exist on all nodes (auto sync). But when we then synchronize manually (cluster:sync -g default), for example on node 3, we see that the bundles no longer exist on that node.
To get the feature's bundles back onto node 3, we have to uninstall and reinstall the feature on the cluster.
Please comment on this strange behaviour.
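To make the steps concrete, this is roughly the command sequence we run from the Karaf shell (my-feature stands in for our real feature name; default is the Cellar group):
cluster:feature-install default my-feature
cluster:bundle-list default
cluster:sync -g default
cluster:feature-uninstall default my-feature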
Version:
apache-servicemix-7.0.1
karaf 4.0.9
cellar 4.0.4
camel 2.16.5
CXF 3.1.9

Related

MariaDB version 10.5.9 unable to install

In my current workplace we are using MariaDB version 10.5.9 for our DBs, and we are trying to reinstall this version for testing purposes on a separate container. However, it seems anything from 10.5.9 and below fails with the following error:
root@mdb-10-5:~# curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash -s -- --mariadb-server-version=mariadb-10.5.9
# [info] Checking for script prerequisites.
# [warning] Found existing file at /etc/apt/sources.list.d/mariadb.list. Moving to /etc/apt/sources.list.d/mariadb.list.old_5
# [error] MariaDB Server version 10.5.9 is not working.
# Please verify that the version is correct.
#
# The latest MariaDB Server versions are:
# 10.10.1 10.3.36 10.4.26 10.5.17 10.6.10 10.7.6 10.8.5 10.9.3
#
# More information on MariaDB releases is available at:
# https://mariadb.com/kb/en/release-notes/
When I try the same command with version 10.5.10, it works and downloads successfully.
I am using the following procedure, one of which is the MariaDB KB:
https://mariadb.com/kb/en/mariadb-package-repository-setup-and-usage/
https://www.dbi-services.com/blog/how-to-install-a-specific-version-of-mariadb/
Both guides use the same repo, and it is also the only specific information I have found when searching for how to install this particular version of MariaDB.
Can anyone offer any suggestions, or has anyone experienced similar problems?
We (MariaDB Corporation) recently moved our repositories to a content delivery network instead of serving them from our own servers only. Unfortunately the new service does not yet have a full archive of older releases; the oldest 10.5 release available there, for example, is 10.5.10.
I have filed an internal bug report / feature request about that already, but it is still pending.
Meanwhile you can "fix" this by first running the repo setup script with a supported version like 10.5.10, and then editing the repository file it created, replacing the version number with 10.5.9, and the host name dlm.mariadb.com with download.mariadb.com.
On Debian and Ubuntu the repository file would be /etc/apt/sources.list.d/mariadb.list, and you'd have to run apt-get update afterwards to pick up the repo change before installing packages.
On RHEL, CentOS, Rocky etc. the file is /etc/yum.repos.d/mariadb.repo and no further action is needed before installing actual packages.
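A minimal sketch of that workaround on Debian/Ubuntu (the exact strings in the generated repo file may differ slightly, so check the file before and after editing):
# run the setup script with a version the CDN still carries
curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash -s -- --mariadb-server-version=mariadb-10.5.10
# point the generated repo file back at 10.5.9 on the download host
sudo sed -i 's/10\.5\.10/10.5.9/g; s/dlm\.mariadb\.com/download.mariadb.com/g' /etc/apt/sources.list.d/mariadb.list
# refresh package metadata before installing the packages
sudo apt-get update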

Custom configuration file flink-conf.yaml

I need to specify different Flink settings for different applications. In other words, each application should run with its own flink-conf.yaml file. What is the proper way to do this?
I found some old recommendations to declare FLINK_CONF_DIR pointing to a custom directory with Flink configuration files (for example: How could I override configuration value in Apache Flink?). However, the official Flink documentation does not mention the FLINK_CONF_DIR variable at all (as of Flink 1.13). Therefore I have doubts that this approach is officially recommended and supported by the Flink developers.
UPDATE 1: Details on application running
I am running Flink on YARN in the Application mode. Here is how I launch the application:
"$flink_home/bin/flink" \
run-application \
--target yarn-application \
--class com.example.App1
The out-of-the-box Flink configuration is located in the $flink_home/conf directory. As I have several applications App1, App2, ..., I want them to use their respective Flink configurations instead of the out-of-the-box configuration.
TL;DR: The paragraph about FLINK_CONF_DIR was accidentally removed when the Flink on YARN docs were rewritten for the Flink 1.12 release. It is still the intended and supported way to establish per-application settings in YARN clusters.
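For example, a per-application launch could look roughly like this (the conf directory and jar path are only placeholders; the directory should contain a complete flink-conf.yaml with App1's settings):
# point Flink at App1's own configuration directory before submitting
export FLINK_CONF_DIR=/path/to/app1/conf
"$flink_home/bin/flink" \
run-application \
--target yarn-application \
--class com.example.App1 \
/path/to/app1.jar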
Other ways to override the configuration:
You can override the settings specified in the cluster's flink-conf.yaml file with settings you specify on the command line, as described in this answer.
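With the application-mode CLI shown above, one way to do that is via -D dynamic properties, e.g. (a sketch; the keys and the jar path are only illustrative):
"$flink_home/bin/flink" run-application \
--target yarn-application \
-Dstate.backend=filesystem \
--class com.example.App1 \
/path/to/app1.jar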
You can also override specific settings from the global configuration in your code, e.g.:
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
Configuration conf = new Configuration();
conf.setString("state.backend", "filesystem");
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
You can also load all of the settings in a flink-conf.yaml file from your application code, via
FileSystem.initialize(GlobalConfiguration.loadConfiguration("/path/to/conf/directory"));
And with Kubernetes you can mount different ConfigMaps for different applications.
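A sketch of the ConfigMap side of that (names and paths are only illustrative; how the ConfigMap is mounted into the Flink container's conf directory depends on your deployment):
# package App1's configuration as its own ConfigMap
kubectl create configmap app1-flink-conf --from-file=flink-conf.yaml=./app1/flink-conf.yaml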

Flink 1.9.1 No FileSystem for scheme "file" error when submit jobs to cluster

We recently upgraded our Flink cluster to version 1.9.1, and an error related to the Hadoop s3a filesystem occurs. The message is below.
2020-01-16 08:39:49,283 ERROR org.apache.flink.runtime.blob.BlobServerConnection - PUT operation failed
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "file"
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3332)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3352)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:456)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
at org.apache.flink.fs.s3.common.hadoop.HadoopFileSystem.create(HadoopFileSystem.java:141)
at org.apache.flink.fs.s3.common.hadoop.HadoopFileSystem.create(HadoopFileSystem.java:37)
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:73)
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:69)
at org.apache.flink.runtime.blob.BlobUtils.moveTempFileToStore(BlobUtils.java:444)
at org.apache.flink.runtime.blob.BlobServer.moveTempFileToStore(BlobServer.java:694)
at org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:351)
at org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:114)
I guess the s3a Hadoop filesystem is trying to create local files but cannot find the 'file' filesystem. Can anyone advise what the potential problem is here?
Thanks
The plugin loader had a shortcoming in 1.9.0 and 1.9.1 that prevented the plugins from lazily loading new classes. It's fixed in the upcoming 1.9.2 and 1.10 releases.
For the time being, you could simply add the jar to the lib folder as a workaround. Note, however, that in 1.10 you can only use s3 through plugins, so keep that in mind when you upgrade.
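A sketch of that workaround, assuming the flink-s3-fs-hadoop filesystem (the jar name and version must match your distribution), applied on every node before restarting the cluster:
# copy the shaded s3 filesystem jar from opt/ into lib/
cp ./opt/flink-s3-fs-hadoop-1.9.1.jar ./lib/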

Flink 1.4 throws errors

Just trying to migrate from Flink 1.3 to 1.4 and getting this exception on a Linux machine (it does not reproduce on Windows).
I've also imported this package:
// https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop2
compile group: 'org.apache.flink', name: 'flink-shaded-hadoop2', version: '1.4.0'
Any help?
From the Flink console:
TriggerWindow(TumblingProcessingTimeWindows(10000), ReducingStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@cb6c5dba, reduceFunction=com.clicktale.reducers.MetricsReducer@4e406694}, ProcessingTimeTrigger(), WindowedStream.reduce(WindowedStream.java:241)) -> Sink: Unnamed (1/1)
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.LocalFileSystem not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2364)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2375)
at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:99)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:401)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.createHadoopFileSystem(BucketingSink.java:1154)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initFileSystem(BucketingSink.java:411)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initializeState(BucketingSink.java:355)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:259)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:694)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:682)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:748)
I faced similar (not specifically this, but dependency-related) issues migrating from 1.3 to 1.4.
In my case, I had to re-generate a fresh POM file using maven archetype and then add the needed dependencies one by one.
See Java Quickstart or Scala Quickstart.
The reason is that there has been a major rework of the dependency structure. See the Release notes for more information.
Note that Flink 1.4 will load any Hadoop jars found via the "hadoop classpath" shell command, and these will be first on the classpath. So if you have an incompatible version of Hadoop installed that the "hadoop" command points at, you can run into this kind of problem.
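A quick way to inspect what that picks up (the output depends entirely on your installation):
# check which Hadoop installation is on the PATH and what it contributes
which hadoop
hadoop version
hadoop classpath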

How to upgrade Solr 5 version already in production on Linux (installed as a service)?

What is the best way to update a Solr 5 version in production (in other words, installed as a service) on Linux? I have an already installed Solr 5.0 (set up via the Service Installation Script) and now need to upgrade it to Solr 5.2.1. I realize some of the config files will need to be changed to take advantage of recent changes. After stopping the current instance, is the best way simply to run the new Solr 5.2.1 Service Installation Script, to just untar solr-5.2.1.tgz into /opt, or something else? Fortunately, I have a very simple setup (not SolrCloud).
After actually looking into the /opt folder, it is fairly obvious I just need to untar Solr into that folder and change the solr symbolic link to point to the new version. This should work most of the time, keeping in mind that occasionally, as Jay pointed out, there could be changes to the Solr files that require more than this.
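A minimal sketch of that, assuming the default layout created by the Service Installation Script (a solr service, with releases installed under /opt):
sudo service solr stop
# unpack the new release next to the old one
sudo tar xzf solr-5.2.1.tgz -C /opt
# repoint the symlink the service uses at the new version
sudo ln -sfn /opt/solr-5.2.1 /opt/solr
sudo service solr start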
