I'm trying to figure out whether I should package multiple blueprint.xml files in a single OSGi bundle that I want to deploy into Karaf. Each blueprint.xml file has one CamelContext. I tried just throwing all my blueprints into the OSGI-INF/blueprint folder, but I got an error saying
Name 'jms' is already in use by a registered component
That seems to make sense, because I do this in every blueprint.xml:
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory">
        <bean class="org.apache.activemq.ActiveMQConnectionFactory">
            <property name="brokerURL" value="tcp://0.0.0.0:61616"/>
            <property name="userName" value="karaf"/>
            <property name="password" value="karaf"/>
        </bean>
    </property>
</bean>
Should I even do that? Or would it be better for each CamelContext to be its own bundle? I've seen https://camel.apache.org/manual/latest/faq/why-use-multiple-camelcontext.html, and it says that multiple CamelContexts make sense when they're deployed as isolated bundles. So what's the best practice here:
Each CamelContext with its own blueprint.xml, bundled with the necessary beans into an OSGi bundle?
One bundle for all necessary beans, and just drop the blueprint.xml files in karaf's deploy folder?
One CamelContext which imports all the other CamelContexts, bundled with all necessary beans?
This is more of a question about service design, not Camel.
Since a bundle is a deployment unit, I would first of all look at the different lifecycles of your code.
If some things must always be deployed together and cannot evolve individually, you can make one bundle containing them, simply because, in terms of releasing and deployment, you gain nothing from dividing the code into smaller units.
On the other hand, if something is evolving faster or slower and must therefore be deployed more often (or less often), you should put it in its own bundle.
This way you are able to deploy code only when it really changes, in contrast to redeploying a giant EAR file containing a big monolithic application when only a small bugfix was implemented.
So, in summary, you can use more or less the microservice principles to "cut" your code into units.
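If you do keep each CamelContext in its own bundle, one way to avoid the duplicated "jms" bean is to define the ConnectionFactory once and share it as an OSGi service. A sketch, assuming a separate "common" bundle that publishes the service; bean ids here are illustrative:

```xml
<!-- blueprint.xml of a shared "common" bundle: publish the factory once -->
<bean id="amqFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://0.0.0.0:61616"/>
    <property name="userName" value="karaf"/>
    <property name="password" value="karaf"/>
</bean>
<service ref="amqFactory" interface="javax.jms.ConnectionFactory"/>

<!-- blueprint.xml of each Camel bundle: look the factory up instead of redefining it -->
<reference id="connectionFactory" interface="javax.jms.ConnectionFactory"/>
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory" ref="connectionFactory"/>
</bean>
```

This also sidesteps the original error: each bundle gets its own Blueprint container, so the "jms" id no longer clashes, and the broker configuration lives in exactly one place.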
Please forgive my ignorance, but I can't find any resources describing how to obtain a reference to an OSGi-declared datasource without hardcoding the name in the Spring XML definition.
I'm using Talend ESB SE Runtime (6.5.1), and trying to create a route that will be reused with different osgi data sources as the referenced datasource for the route.
If, in the Spring configuration I declare
<osgi:reference id="dataSource" interface="javax.sql.DataSource" filter="(osgi.jndi.service.name=myDataSourceName)" />
this works. However, I can't see any way of parameterising this, since when I try using
<osgi:reference id="dataSource" interface="javax.sql.DataSource" filter="(osgi.jndi.service.name=${app.datasource.name})" />
the Karaf log complains that it can't find a service called ${app.datasource.name}, which it clearly isn't going to find.
If parameters can't be used in the filter for OSGi references, then I could configure this in Java, but I can't see anywhere how to get from a CamelContext registry to the underlying OSGi registry, which is what the osgi:reference element does in the Spring XML.
If anyone can point me in the right direction here, that would be great, since I suspect I may be misunderstanding how the various components function.
Thanks!
I think it's a problem with configuring the property placeholder: the log shows your app.datasource.name property's name rather than its value, so the placeholder isn't being resolved. Try a Spring XML config like this.
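A sketch of wiring the placeholder through the ConfigAdmin compendium support in Spring Dynamic Modules, assuming a ConfigAdmin PID named app (e.g. an etc/app.cfg file in Karaf) containing app.datasource.name; the osgix and ctx namespace prefixes must be declared in the beans root element:

```xml
<!-- Load properties from the ConfigAdmin PID "app" -->
<osgix:cm-properties id="appProps" persistent-id="app"/>
<ctx:property-placeholder properties-ref="appProps"/>

<!-- ${app.datasource.name} is now resolved before the reference is created -->
<osgi:reference id="dataSource" interface="javax.sql.DataSource"
    filter="(osgi.jndi.service.name=${app.datasource.name})"/>
```

Without the placeholder configurer, Spring passes the literal string ${app.datasource.name} into the filter, which matches the error you are seeing.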
I was recently looking at how to set up caching with JDO on App Engine, and there doesn't seem to be any good documentation. How do you make use of Memcache to avoid unnecessary hits to the datastore? I am using Android Studio / Gradle.
After some poking around, I came up with the following working solution.
Add the following lines to config files:
jdoconfig.xml
<property name="datanucleus.cache.level2.type" value="jcache"/>
<property name="datanucleus.cache.level2.cacheName" value="Anything"/>
build.gradle (for appengine module)
compile 'net.sf.jsr107cache:jsr107cache:1.1'
compile 'com.google.appengine:appengine-jsr107cache:1.9.17'
compile 'org.datanucleus:datanucleus-cache:3.1.3'
Of course, your mileage may vary, depending on your specific JDO setup.
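With the level-2 cache configured as above, entities can opt in from code. A minimal sketch, assuming a hypothetical Article entity (whether classes are cached by default depends on your DataNucleus cache mode, so marking them explicitly is the safe route):

```java
import javax.jdo.annotations.Cacheable;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

// Opt this entity into the L2 cache configured in jdoconfig.xml,
// so repeated lookups can be served from Memcache instead of the datastore.
@Cacheable("true")
@PersistenceCapable
public class Article {
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Long id;

    @Persistent
    private String title;
}
```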
I want to build a program which connects to a database. In principle, my code works. I use Hibernate 4.3.1 and the PostgreSQL driver postgresql-9.3-1100.jdbc41.jar.
My persistence.xml looks like this:
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
             version="1.0">
    <persistence-unit name="******" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>
            <property name="hibernate.connection.driver_class" value="org.postgresql.Driver"/>
            <property name="hibernate.connection.username" value="******"/>
            <property name="hibernate.connection.password" value="******"/>
            <property name="hibernate.connection.url" value="jdbc:postgresql://localhost:5432/*******"/>
            <property name="hibernate.hbm2ddl.auto" value="create"/>
        </properties>
    </persistence-unit>
</persistence>
For localhost it's acceptably fast, but if I want to connect to an external server via the internet, it takes about 30-60 seconds to establish the connection. Once it is initialised, all subsequent requests are executed fast enough, but the first call takes way too long.
I could restructure the whole project as a web project and create a JBoss datasource via JTA. That way, the connection would be established before the program starts and all would be fine. But I would much rather not have to do that. What's the right way to connect like this?
Edit: Some more information: The line which takes the long time is:
javax.persistence.EntityManagerFactory emf = Persistence.createEntityManagerFactory("OneGramService");
Greetings,
Rhodarus
Try to set hibernate.temp.use_jdbc_metadata_defaults property to false.
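The slow first call is typically Hibernate connecting at startup to read JDBC metadata over the slow link; disabling that lookup should be safe here because hibernate.dialect is already set explicitly. A sketch of the extra line inside the properties block of the persistence.xml above:

```xml
<!-- Skip the JDBC metadata round-trip on startup; requires an explicit hibernate.dialect -->
<property name="hibernate.temp.use_jdbc_metadata_defaults" value="false"/>
```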
I have a running JavaEE web application (WAR) whose Entities will be changed in the next version of the application. This also means structure changes in the underlying database of course.
What is the best way to keep your old data and migrate to the new Entity structure after an application rewrite?
Do I have to manually change the database structure before redeploying or are there other ways?
In EclipseLink 2.4 and later, add the following persistence properties to your persistence.xml.
<property name="eclipselink.ddl-generation" value="create-or-extend-tables" />
<property name="eclipselink.ddl-generation.output-mode" value="database" />
This feature allows creating database tables and modifying any that already exist so that they match the object model.
You should version-control your upgrade and downgrade DB scripts along with your codebase. Your upgrade scripts should ensure the safety of your existing data, and you should always make a backup before applying your changes. Try Liquibase.
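A minimal Liquibase changelog sketch, with hypothetical table and column names, that extends the schema while preserving existing rows:

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

    <!-- Each changeSet runs exactly once per database and is tracked
         in the DATABASECHANGELOG table, so upgrades are repeatable. -->
    <changeSet id="2" author="dev">
        <addColumn tableName="customer">
            <column name="email" type="varchar(255)"/>
        </addColumn>
    </changeSet>
</databaseChangeLog>
```

Running this at deploy time (via the CLI or a servlet listener) migrates the old data in place, so no manual structure changes are needed before redeploying.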
I have been experimenting with GAE (1.7.0) for a couple of weeks now and I am having some issues with STRONG consistency.
I have been researching the issue, but I am still unclear.
Is someone able to definitively say that, if using JDO within GAE, the consistency will be EVENTUAL, and that the only way to achieve STRONG consistency is not to use JDO but to use the GAE entity classes with ancestry?
At this stage I don't know if it is my code at fault or just not supported within the environment. In any case I am losing my fragile little mind :-)
My jdoconfig.xml file
<?xml version="1.0" encoding="utf-8"?>
<jdoconfig xmlns="http://java.sun.com/xml/ns/jdo/jdoconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:noNamespaceSchemaLocation="http://java.sun.com/xml/ns/jdo/jdoconfig">
    <persistence-manager-factory name="transactions-optional">
        <property name="javax.jdo.PersistenceManagerFactoryClass"
                  value="org.datanucleus.store.appengine.jdo.DatastoreJDOPersistenceManagerFactory"/>
        <property name="javax.jdo.option.ConnectionURL" value="appengine"/>
        <property name="javax.jdo.option.NontransactionalRead" value="true"/>
        <property name="javax.jdo.option.NontransactionalWrite" value="true"/>
        <property name="javax.jdo.option.RetainValues" value="true"/>
        <property name="datanucleus.appengine.autoCreateDatastoreTxns" value="true"/>
        <property name="datanucleus.appengine.datastoreReadConsistency" value="STRONG"/>
    </persistence-manager-factory>
</jdoconfig>
Thanks
I do not think you can be assured of consistency just by setting datastoreReadConsistency to STRONG in the jdoconfig.xml file.
Google App Engine's High Replication Datastore (HRD) is now the default data repository for App Engine applications. This model is guaranteed to work for eventual consistency only.
What you have mentioned is correct and matches the documentation, which states: "To obtain strongly consistent query results, you need to use an ancestor query limiting the results to a single entity group."
See the note at https://developers.google.com/appengine/docs/java/datastore/structuring_for_strong_consistency
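A sketch of such an ancestor query with the low-level datastore API; the entity kinds and key name here are hypothetical, and the statements belong inside a servlet or service method:

```java
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.PreparedQuery;
import com.google.appengine.api.datastore.Query;

DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

// All "Task" entities created with this parent key belong to one
// entity group, so a query scoped to that ancestor is strongly
// consistent even on the High Replication Datastore.
Key parent = KeyFactory.createKey("TaskList", "default");
PreparedQuery pq = ds.prepare(new Query("Task", parent));
```

Queries without an ancestor remain eventually consistent on HRD regardless of the JDO read-consistency setting.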