Karaf: feature:install restarts previous bundles - apache-camel

I'm facing an irritating behavior from my Karaf server: the title says it all, already-installed bundles get restarted when I use a feature:install command.
* Project context *
Most of the bundles I deal with are Camel routes; the other ones are common tools shared by the routes.
As a result, I have a two-level project: a common part that is installed first, and the Camel routes that all depend on the common part (from a Maven point of view).
* Scenario *
start a fresh instance of karaf
install the common features
install a camel route feature: no troubles so far
install a second camel route feature: the bundles from the previously installed feature will restart.
* Breakthrough made *
All the bundles declared a common config file, with the option "update-strategy=reload". This means that Karaf would notify each bundle of any modification of this file, and the bundle would restart to take it into account.
As a matter of fact, when I installed a new bundle with a dependency on this file, the file would be read in order to initialize the bundle's properties, and Karaf considered that a file modification. Therefore, installing a new bundle made all the others restart.
As you would expect, I dealt with that problem by removing the update-strategy option, and most of my features are now clean.
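For reference, the declaration involved looks roughly like this (anonymized sketch; the persistent-id is made up, not my real one):

    <!-- Anonymized Blueprint snippet; "my.shared.config" is a placeholder persistent-id -->
    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">
        <!-- update-strategy="reload" reloads the whole Blueprint context whenever
             etc/my.shared.config.cfg changes; removing it (or using "none") avoids that -->
        <cm:property-placeholder persistent-id="my.shared.config"
                                 update-strategy="reload"/>
        <!-- beans and camelContext go here -->
    </blueprint>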
* Leftovers *
BUT some of them still hold the bug: installing any of those troublesome features will make all the other installed features restart. This is a ONE-WAY problem: installing a clean bundle will not make the troublesome ones restart.
I checked anyway, but no other config file could be responsible for that.
Any help or advice would be appreciated. I can also provide anonymized examples of any file that would help you understand, like an OSGi context or a feature's pom.xml.
One last thing: my features group around 50 bundles each, so I can barely make sense of the Karaf logs and can't pinpoint which bundle is restarted first.
Thanks for your time and attention!

I think there are some misconceptions in what you describe.
update-strategy=reload does not cause a bundle to reload; it causes a Blueprint context to reload.
You should also not share the same config between bundles; it is known to mess up your deployments.
There are also other reasons why a bundle may restart. A Karaf feature install tries to provide the optimal set of bundles needed overall in Karaf to satisfy the set of currently installed features.
A typical case is that you first install a feature with a bundle containing an optional package import. At that moment the package cannot be provided. Then you install a second feature that provides an exporter of that package. Now the optional dependency of the bundle can be satisfied, and the bundle will be restarted by Karaf.
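As an illustration (the package name is made up), such an optional import looks like this in the bundle manifest, typically generated by the maven-bundle-plugin:

    Import-Package: com.example.shared.metrics;resolution:=optional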
You can look into such cases by using feature:install -v. This will show you which bundles are restarted and also why. So maybe this can help you debug why the restart happens.
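For example (the feature name is made up):

    karaf@root()> feature:install -v my-camel-route-feature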

Related

Feature repo add in Karaf: Error Resolving Artifact: Could not read from local repo

    Error executing command: Error resolving artifact
    com.sample:features:xml:features:4.1.4-SNAPSHOT:
    [Could not find artifact com.sample:features:xml:features:4.1.4-SNAPSHOT in default
    local (file:/C:{/Users}.m2/repository/),
    Could not find artifact com.sample:features:xml:features:4.1.4-SNAPSHOT in apache
    (https://repository.apache.org/content/groups/snapshots-group/),
    Could not find artifact com.sample:features:xml:features:4.1.4-SNAPSHOT
    in ops4j.sonatype.snapshots.deploy
    (https://oss.sonatype.org/content/repositories/ops4j-snapshots/)] :
    com.sample:features:xml:features:4.1.4-SNAPSHOT.
I could not resolve this error even after uncommenting the local repository setting in org.ops4j.pax.url.mvn and adding my local .m2 path.
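For reference, the setting in question lives in etc/org.ops4j.pax.url.mvn.cfg and would look something like this (the path below is only an example, not my actual one):

    # etc/org.ops4j.pax.url.mvn.cfg
    # points pax-url-aether at the local Maven repository (example path, replace with your own)
    org.ops4j.pax.url.mvn.localRepository = C:/Users/someuser/.m2/repository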
Thanks in advance for your suggestions.
You might have to reset your Apache Karaf instance to a fresh state:
Stop Karaf
Delete the data folder from the Karaf installation folder.
Start Karaf
Re-install the features and bundles you need.
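On a default installation that amounts to roughly the following, assuming you run it from the Karaf installation directory (use the .bat equivalents on Windows):

    bin/stop        # stop the running instance (or shut it down from the console)
    rm -rf data     # wipe the bundle cache, logs and resolver state
    bin/karaf       # start again (or bin/start to run it in the background)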
Alternatively, you can try removing any features, feature repositories and bundles that have been installed from there before removing the Maven repository.
It's generally a bad idea to remove or replace Maven repositories in Apache Karaf from which users might have already added feature repositories and installed features and bundles.
I once tried to replace an offline repository I had made with a new one which was missing some older artifacts. This led to a bunch of issues, like the inability to uninstall some of the older features.

[Apache Flink]: Where is the flink-s3-fs-hadoop plugin?

I would like to read and write some data from S3 with Apache Flink 1.11.2. The documentation recommends using the Presto plugin for checkpoints and the Hadoop plugin for pipeline data.
According to that section you have to copy the plugins from the opt folder to the plugins folder. I can find flink-s3-fs-presto-1.11.2.jar under opt, but there is no flink-s3-fs-hadoop-1.11.2.jar. Where can I find the s3-hadoop plugin for setting up my production environment?
And how can I use these plugins in the IDE? Simply adding them to the pom.xml as provided dependencies? And then how can I pass the credentials in the IDE?
That is weird; I can see that they are both present in the official binaries in opt in 1.11.1. However, if you can't find them, you can simply try to get the jars from Maven here and copy them to the required place. Another thing that may work is adding the dependency to the project with compile scope.
Running the job locally is described here. There are various ways of configuring the credentials when running the job in the IDE; one might be adding a core-site.xml to the resources folder with the proper configuration.
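For the production setup, copying the jar into its own subfolder under plugins is roughly what the plugin mechanism expects (a sketch, assuming the standard Flink 1.11 distribution layout):

    # run from the Flink distribution directory; each plugin needs its own subfolder
    mkdir -p plugins/s3-fs-hadoop
    cp opt/flink-s3-fs-hadoop-1.11.2.jar plugins/s3-fs-hadoop/   # or the jar downloaded from Maven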
EDIT:
As for the local execution, it was explained here a little bit.
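Regarding credentials in the IDE, one possible shape of that core-site.xml is the following (the keys are the standard Hadoop s3a ones; the values are placeholders, not real credentials):

    <!-- src/main/resources/core-site.xml -- placeholder values only -->
    <configuration>
        <property>
            <name>fs.s3a.access.key</name>
            <value>YOUR_ACCESS_KEY</value>
        </property>
        <property>
            <name>fs.s3a.secret.key</name>
            <value>YOUR_SECRET_KEY</value>
        </property>
    </configuration>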

Gradle for AngularJS Application

I'm pretty new to Gradle and currently there is a question that bugs me. The situation is as follows: based on a Bower technology stack I implemented an Angular app. The app as it is doesn't change, nor does it have to be built in any way, since there are just static JavaScript and HTML pages. In my opinion the used versions of Angular, Bootstrap and the other libraries should also stay the same due to compatibility between the individual libraries, so these files shouldn't change either. Is this correct behavior, or should I at least get the latest build of the used library versions when I deploy the application?
Also, Less is used in the application. Is there a way to compile the CSS every time I run the Gradle build, or should I just deliver the compiled and finished CSS file?
As a result I'm also not quite sure whether it's recommended at all to use Gradle to deploy a "static" Angular application.
I hope someone out there can help me answer the questions above. As you can guess, I'm not very experienced at deploying such Angular applications, since this is my first project with this kind of problem.
This goes for all package managers, not just Gradle but npm, gem, NuGet, Maven, whatever.
Use static dependency version numbers. Otherwise you will end up finding breaking dependencies in QA or Prod rather than in development.
This means you need to be aware of security fixes in your dependencies.
When you need a feature or fix from a new version of a dependency, unlock the versions, rebuild and test in dev, then re-lock the dependencies and send the build to QA for verification.
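In Gradle terms that means pinning exact versions rather than dynamic ones; a small sketch (the WebJars coordinates below are only an example of how the front-end libraries might be pulled in):

    // build.gradle -- illustrative coordinates, not a recommendation of specific versions
    dependencies {
        // pinned, reproducible builds:
        implementation 'org.webjars:angularjs:1.5.8'
        implementation 'org.webjars:bootstrap:3.3.7'
        // avoid dynamic versions like this; they can break a build with no code change:
        // implementation 'org.webjars:angularjs:1.+'
    }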

Not able to start the bundle in ServiceMix

I have a bundle up and running in ServiceMix. I went to my company's repository and downloaded the corresponding JAR to my local machine. I extracted that JAR and found out that it had only one folder, META-INF.
Inside this folder there is a MANIFEST.MF file and my resources, such as the Spring configuration file and the Camel context file.
There I got my first question: where are the source files of this JAR, i.e. the Java classes and all? The only things I saw there were the manifest file, a pom.xml, another pom properties file and a couple of other configuration files for Spring and Camel.
This led to my next step. I had a local copy of this project in my workspace as well. I built this project locally and found the JAR in the target directory of the project.
Now the following steps might seem silly, but anyway, I did a little experiment. I extracted this JAR from target to see its content. I believed it was a bundle because I used the maven-bundle-plugin, and there is no way you can tell by looking at a JAR whether it's just a JAR or an OSGi bundle. OK, so I extracted the JAR, and guess what: this time it did have the compiled Java classes.
This is not the end; I did something silly again. I removed the compiled classes from this JAR and made it exactly the same as the one I copied from my company's central repository. Then I used the JDK's jar utility to create a new JAR.
Now I have two JARs:
one which I downloaded from the company's central repo,
another one which I created myself. It has exactly the same content as the other one; I even used the same MANIFEST.MF while creating this JAR (since I knew the manifest is the backbone of an OSGi bundle).
I secure-copied this bundle to my server's home directory and finally installed this bundle/JAR in ServiceMix using:
    install file:path_to_JAR/JAR_FILE_NAME
It got installed successfully, but when I tried to start the bundle, it could not start. Using display-exception I saw the exception: it wasn't able to load the beans and could not initialize the application context, followed by a more specific "ClassNotFound" exception. I understand that it wasn't able to find the classes defined in my application context. BUT WHY?
I did exactly the same steps, and I checked them multiple times. If mine could not start, why is the earlier one up and running?
It might sound silly to others who have worked in an OSGi environment, but now I am starting to reconsider things, especially ServiceMix.
Thanks for any suggestions.
This is not really about OSGi; it's more about your application.
As I don't know your project, I can only make some assumptions.
First, the JAR you got from the company repository is most likely an "older" version and not the same as your local sources. With ServiceMix it's quite possible to have only Blueprint or Spring XMLs in your bundle, because those are valid resources that the Camel Blueprint/Spring extenders are able to pick up. Those XMLs are interpreted, and if they only make use of standard Camel components there is no reason to have a single class inside your bundle.
Now back to your newly created bundle: obviously you have some new "code" in your Camel XML which requires not only standard Camel classes but also some processors you created on your own, and those classes need to stay in the bundle!
Best just deploy the newly created bundle with all its classes. You should rather check what has changed in the Camel XML files.
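To illustrate (the class and route are made up): as soon as the XML references one of your own classes, that class has to be packaged inside the bundle, otherwise the context fails with exactly this kind of ClassNotFoundException:

    <!-- Hypothetical Camel Spring XML; com.example.MyProcessor stands for any custom class -->
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
                               http://www.springframework.org/schema/beans/spring-beans.xsd
                               http://camel.apache.org/schema/spring
                               http://camel.apache.org/schema/spring/camel-spring.xsd">

        <!-- only resolvable if com/example/MyProcessor.class is inside this bundle -->
        <bean id="myProcessor" class="com.example.MyProcessor"/>

        <camelContext xmlns="http://camel.apache.org/schema/spring">
            <route>
                <from uri="timer:example?period=5000"/>
                <process ref="myProcessor"/>
                <to uri="log:example"/>
            </route>
        </camelContext>
    </beans>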

DataNucleus Enhancer doesn't work

I'm writing a web app using Google App Engine and Spring MVC. I carefully upgraded to v2 of the DataNucleus plugin by following these steps: http://code.google.com/p/datanucleus-appengine/wiki/UpgradingToVersionTwo (I use Eclipse).
When I try to run the Enhancer Tool I get the following error:
    Exception in thread "main" Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL
    "file:/.../eclipse/plugins/com.google.appengine.eclipse.sdkbundle_1.6.4.v201203300216r37/appengine-java-sdk-1.6.4/lib/opt/user/datanucleus/v2/datanucleus-core-3.0.6.jar" is already registered, and you are trying to register an identical plugin located at URL
    "file:/.../eclipse/plugins/com.google.appengine.eclipse.sdkbundle_1.6.4.v201203300216r37/appengine-java-sdk-1.6.4/lib/opt/tools/datanucleus/v2/datanucleus-core-3.0.6.jar"
I formatted the message so that you can see the tiny difference: one jar is loaded from the "user" directory, the other one from the "tools" directory. I don't understand why. In the project build path there is only the one from "user", and to the DataNucleus configuration I added the one from "tools", just like the how-to above suggested.
In other cases I've seen around, this message was mostly caused by conflicting versions of the DataNucleus plugin, but that doesn't apply to me. I guess it's just some stupid thing in my case... so what am I doing wrong?
So after all, I didn't read the instructions as carefully as I thought. The problem really was that the jars were there twice: once in the project build path and once in the DataNucleus configuration. It shouldn't be in the project build path (or in fact, it shouldn't be in one of the two places; it doesn't matter which one). I had added it there automatically when I copied the libs to the war directory and assumed it had to be done. But the instructions clearly say that only the JDO API jar needs to be in the project build path.
One thing I don't understand, though: in one step of the instructions I had to uncheck "use project classpath when running tools" in the DataNucleus configuration. So how is it possible that the plugin was still using the libs configured in the project build path?
