Not able to start the bundle in ServiceMix - apache-camel

I have a bundle up and running in ServiceMix. I went to my company's repository and downloaded the corresponding JAR to my local machine. I extracted that JAR and found out that it contained only one folder, META-INF.
Inside this folder there is a MANIFEST.MF file and my resources, such as the Spring configuration file and the Camel context file.
There I got my first question: where are the source files of this JAR, i.e. the Java classes? The only things I saw were the manifest file, a pom.xml, a pom.properties file, and a couple of other configuration files for Spring and Camel.
This led to my next step. I had a local copy of this project in my workspace as well, so I built the project locally and found the JAR in the target directory of the project.
The following steps might seem silly, but I did a little experiment anyway. I extracted this JAR from target to see its contents. I believed it was a bundle because I had used the maven-bundle-plugin, and you cannot tell from the outside whether a JAR is a plain JAR or an OSGi bundle without opening its manifest. So I extracted the JAR, and guess what: this time it did have the compiled Java classes.
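(Aside: the manifest is exactly where the difference lives. A JAR built by the maven-bundle-plugin carries OSGi headers in META-INF/MANIFEST.MF roughly like the following; the names and versions here are invented for illustration.)
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.my-camel-bundle
Bundle-Version: 1.0.0
Import-Package: org.apache.camel;version="[2.8,3)"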
This is not the end; I did something silly again. I removed the compiled classes from this JAR and made it exactly the same as the one I copied from my company's central repository. Then I used the JDK's jar utility to create a new JAR.
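(The recreation step would look roughly like this; file names are placeholders. The cfm flags matter here: a plain jar cf would generate a fresh manifest and silently drop the OSGi headers.)
jar cfm my-bundle.jar MANIFEST.MF -C extracted_jar .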
Now I have two JARs:
one which I downloaded from the company's central repo,
another which I created myself. It has exactly the same content as the other one; I even used the same MANIFEST.MF while creating this JAR (since I knew the manifest is the backbone of an OSGi bundle).
I secure-copied this bundle to my server's home directory, and finally installed this bundle/JAR in ServiceMix using:
install file:path_to_JAR/JAR_FILE_NAME
It got installed successfully, but when I tried to start the bundle, it could not start. Using display-exception, I saw the exception: it wasn't able to load the beans and could not initialize the application context, followed by a more specific ClassNotFoundException. I understand that it wasn't able to find the classes defined in my application context. But why?
I did exactly the same steps, and I checked them multiple times. If mine could not start, why is the earlier one up and running?
It might sound silly to others who have worked in an OSGi environment, but now I am starting to have second thoughts, especially about ServiceMix.
Thanks for any suggestion.

This is not really about OSGi; it's about your application.
As I don't know your project, I can only make some assumptions.
First, the JAR you got from the company repository is most likely an "older" version and not the same as your local sources. With ServiceMix it's quite possible to have only Blueprint or Spring XMLs in your bundle, because those are valid resources that the Camel Blueprint/Spring extender is able to pick up. Those XMLs are interpreted, and if they only make use of standard Camel components, there is no reason to have a single class inside your bundle.
Now back to your newly created bundle: obviously you have some new "code" in your Camel XML which requires not only standard Camel classes but also some processors you created on your own, and those classes need to stay in the bundle!
Best just deploy the newly created bundle with all its classes. You should rather check what has changed in the Camel XML files.
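To make that concrete, here is a minimal sketch of a Camel Spring XML with one custom processor (bean and class names are invented). The from/to lines use only standard components and need no classes in the bundle; the process step is what forces the compiled class to be present:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring
         http://camel.apache.org/schema/spring/camel-spring.xsd">

  <!-- Custom processor class: must be inside the bundle (or importable).
       If the .class files were stripped from the JAR, this line is what
       triggers the ClassNotFoundException and the failed context. -->
  <bean id="myProcessor" class="com.example.MyProcessor"/>

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
      <from uri="file:inbox"/>
      <process ref="myProcessor"/>
      <to uri="file:outbox"/>
    </route>
  </camelContext>
</beans>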

Related

How to add a service worker to an existing, old, react project?

I'm working on an old React project, which I need to add functionality to, but when I deploy the React build on the server, it fails, claiming it cannot find several CSS and JS files, although I published all files within the build folder. I tried different things:
First, I kept the old service-worker.js in the production folder that IIS uses, but replaced everything else.
Then, I tried also deleting the service-worker.js, since I thought it was optional, and my npm run build didn't create a service-worker.js file.
Then, I tried copying the service-worker.js file that existed on production, and manually changing it to point to my css and js files in the /static/ folder of my build folder.
All of these solutions have yielded the same result. So I have a few questions:
Is the service worker necessary? If not, could this error relate to something entirely different other than the service worker?
If it is necessary, why could my npm run build command not create the service worker with the rest of the files in the build folder?
If I do need it, how can I manually add it to a project that already exists?
If the production folder already had a service worker, and my build is not creating one, I can also assume my React version may be newer, but I find that odd, since the computer I use is one a former employee at my company used, and I didn't manually change anything about this project.

Felix File Install OSGi: How to automatically load a bundle (Configuration and how it works)?

I was looking for a way to load and start OSGi bundles on the fly, at runtime, in my system. After finding Felix File Install, I thought it was probably the most elegant and easiest way.
Thing is: It's not working. ;-)
I downloaded the Felix File Install JAR and deployed it as an OSGi bundle into my software.
It also starts with all my OSGi bundles in Eclipse without any problems. However, I don't know where I should set the properties (I tried putting them in the arguments box in Eclipse's project properties; no success though).
Furthermore, my bundle isn't reacting when something in the directory changes. Even when I create the default directory and manipulate its contents, nothing happens; no bundles get loaded. Somehow I have the feeling I have overlooked something huge here, since it seems to work perfectly for most people without that much effort.
Would be really glad for your help.
Bye
NOTE: There is no Apache Felix installed, only the Felix File Install JAR. The OSGi framework is Equinox...
They are system properties, not program arguments, for Equinox. Use it like this:
-Dfelix.fileinstall.dir=/pickup
As per the documentation, the default value for this is ./load.
NOTE: In Eclipse, add the above to the VM arguments.
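For example, in the launch configuration's VM arguments (the directory and poll interval here are just an illustration; felix.fileinstall.poll is the scan interval in milliseconds):
-Dfelix.fileinstall.dir=./load
-Dfelix.fileinstall.poll=2000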

DataNucleus libraries and maven-gae-plugin

I'm using the maven-gae-plugin to manage a Google App Engine project, but I don't know how to include the libraries required to use JPA.
Google's documentation says:
The classpath must contain the JARs 'datanucleus-core-*.jar', 'datanucleus-jpa-*', 'datanucleus-enhancer-*.jar', 'asm-*.jar', and 'geronimo-jpa-*.jar' (where * is the appropriate version number of each JAR) from the 'appengine-java-sdk/lib/tools/' directory, as well as all of your data classes.
How can I tell the plugin to put all the jars in the classpath?
So far I have just edited the pom.xml file, setting gae.version to 1.7.3 (leaving datanucleus.version at 1.1.5), and run mvn gae:unpack, but I cannot get it to work.
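For context, those versions live in the properties section of the pom.xml, roughly like this:
<properties>
  <gae.version>1.7.3</gae.version>
  <datanucleus.version>1.1.5</datanucleus.version>
</properties>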
First, I have problems with javax.persistence, which is not found. Do I have to add it manually to pom.xml?
If I do, the development server starts, but I cannot work with the storage; I get the following error:
SEVERE: Found Meta-Data for class com.sharecost.entities.User but this class is not enhanced!! Please enhance the class before running DataNucleus.
I found a solution to the second part of my question: looking at the pom.xml file, I discovered that all entities are supposed to be in a **/model package.
I still don't know if the manual inclusion of the javax.persistence dependency is actually required.
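For what it's worth, here is a sketch of the two pom.xml pieces involved. The artifact names and version numbers below are assumptions for the JPA 1.0 / DataNucleus 1.1.x era, so check them against your SDK:
<!-- Assumed artifact: provides javax.persistence (JPA 1.0) -->
<dependency>
  <groupId>org.apache.geronimo.specs</groupId>
  <artifactId>geronimo-jpa_3.0_spec</artifactId>
  <version>1.1.1</version>
</dependency>

<!-- Enhances compiled entity classes; note the **/model pattern,
     matching the package layout the pom.xml expects -->
<plugin>
  <groupId>org.datanucleus</groupId>
  <artifactId>maven-datanucleus-plugin</artifactId>
  <version>1.1.4</version>
  <configuration>
    <api>JPA</api>
    <mappingIncludes>**/model/*.class</mappingIncludes>
  </configuration>
  <executions>
    <execution>
      <phase>compile</phase>
      <goals>
        <goal>enhance</goal>
      </goals>
    </execution>
  </executions>
</plugin>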

DataNucleus Enhancer doesn't work

I'm writing a web app using Google App Engine and Spring MVC. I carefully upgraded to v2 of the DataNucleus plugin by following these steps: http://code.google.com/p/datanucleus-appengine/wiki/UpgradingToVersionTwo (I use Eclipse).
When I try to run the Enhancer Tool, I get the following error:
Exception in thread "main" Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL
"file:/.../eclipse/plugins/com.google.appengine.eclipse.sdkbundle_1.6.4.v201203300216r37/appengine-java-sdk-1.6.4/lib/opt/user/datanucleus/v2/datanucleus-core-3.0.6.jar" is already registered, and you are trying to register an identical plugin located at URL
"file:/.../eclipse/plugins/com.google.appengine.eclipse.sdkbundle_1.6.4.v201203300216r37/appengine-java-sdk-1.6.4/lib/opt/tools/datanucleus/v2/datanucleus-core-3.0.6.jar."
I formatted the message so that you can see the tiny difference: one JAR is loaded from the "user" directory, the other from the "tools" directory. I don't understand why. In the project build path there is only the one from "user", and to the DataNucleus configuration I added the one from "tools", just as the how-to above suggested.
In other cases I've seen around, this message was mostly caused by conflicting versions of the DataNucleus plugin, but that doesn't apply to me. I guess it's just some silly mistake in my case... so what am I doing wrong?
So, after all, I didn't read the instructions as carefully as I thought. The problem really was that the JARs were there twice: once in the project build path, once in the DataNucleus configuration. The JAR shouldn't be in both (in fact it doesn't matter which one you remove it from). I had added it to the build path automatically when I copied the libs to the war directory, and I assumed it had to be done. But the instructions clearly say that only jdo-api needs to be in the project build path.
One thing I don't understand, though. In one step of the instructions I had to uncheck "use project classpath when running tools" in the DataNucleus configuration. So how is it possible that the plugin was still using the libs configured in the project build path?

Maven Plugin Working Directory Not Constant

I wrote a Maven plugin that creates some XML files on the classpath of my project. The Maven project is fairly complex and has one master project with many sub-projects (think services for a larger application).
The plugin takes a directory argument in the pom.xml, which is something relative to the classpath like this:
<docDestination>src/main/webapp/static/</docDestination>
However, when I try to access this folder via new File(docDestination), the resulting directory depends on the project (or sub-project) from which I ran the mvn install command that triggered the plugin.
The plugin is only declared in the pom.xml of one of the sub-projects, but if I run mvn install from the parent, it creates the XML files in the src/main/... folder of the parent project. How do I get the plugin to use the directory of the project in which it is declared rather than that of the parent?
I should note that if I navigate to the sub-project in a terminal and run mvn install in that directory, the files are created in the right place, which explains the title of my post.
Use the ${basedir} variable:
<docDestination>${basedir}/src/main/webapp/static</docDestination>
This should use the basedir of the respective module (regardless of whether it is the top-level project or a sub-module).
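For example, in the sub-project's pom.xml (the plugin coordinates here are hypothetical):
<plugin>
  <groupId>com.example</groupId>
  <artifactId>xml-generator-maven-plugin</artifactId>
  <configuration>
    <!-- ${basedir} resolves to the module being built, so the files
         land in this sub-project no matter where mvn install runs -->
    <docDestination>${basedir}/src/main/webapp/static</docDestination>
  </configuration>
</plugin>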
