Efficient way to connect to database (by performance)

I want to build a program which connects to a database. In principle, my code works. I use Hibernate 4.3.1 and the PostgreSQL driver postgresql-9.3-1100.jdbc41.jar.
My persistence.xml looks like this:
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence" version="1.0">
    <persistence-unit name="******" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>
            <property name="hibernate.connection.driver_class" value="org.postgresql.Driver"/>
            <property name="hibernate.connection.username" value="******"/>
            <property name="hibernate.connection.password" value="******"/>
            <property name="hibernate.connection.url" value="jdbc:postgresql://localhost:5432/*******"/>
            <property name="hibernate.hbm2ddl.auto" value="create"/>
        </properties>
    </persistence-unit>
</persistence>
For localhost it is acceptably fast, but if I want to connect to an external server over the internet, it takes about 30-60 seconds to establish the connection. Once it is initialised, all subsequent requests are executed fast enough, but the first call takes far too long.
I could restructure the whole project as a web project and create a JBoss datasource via JTA. That way, the connection would be established before the program starts and all would be fine. But I would much prefer not to have to do that. What's the right way to connect like this?
Edit: Some more information: the line that takes so long is:
javax.persistence.EntityManagerFactory emf = Persistence.createEntityManagerFactory("OneGramService");
Greetings,
Rhodarus

Try setting the hibernate.temp.use_jdbc_metadata_defaults property to false.
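With that property set to false, Hibernate skips the JDBC metadata lookup at startup, which otherwise needs a round trip to the database, so hibernate.dialect then has to be set explicitly. A minimal sketch against the persistence.xml shown in the question (note that the dialect configured above is HSQLDialect even though the URL is PostgreSQL; org.hibernate.dialect.PostgreSQLDialect would match the driver):

<!-- skip the JDBC metadata lookup that requires a database round trip at startup -->
<property name="hibernate.temp.use_jdbc_metadata_defaults" value="false"/>
<!-- the dialect must then be given explicitly -->
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>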

Related

Multiple camel blueprints in one OSGi bundle?

I'm trying to figure out whether I should package multiple blueprint.xml files in a single OSGi bundle that I want to deploy into Karaf. Each blueprint.xml file has one Camel context. I've tried just throwing all my blueprints into the OSGI-INF/blueprint folder, but I got an error saying
Name 'jms' is already in use by a registered component
That seems to make sense, because I do this in every blueprint.xml
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://0.0.0.0:61616"/>
<property name="userName" value="karaf"/>
<property name="password" value="karaf"/>
</bean>
</property>
</bean>
Should I even do that? Or would it be better for each CamelContext to be its own bundle? I've seen this https://camel.apache.org/manual/latest/faq/why-use-multiple-camelcontext.html and it says that multiple CamelContexts make sense when they're deployed as isolated bundles. So what's the best practice here:
Each CamelContext with its own blueprint.xml, bundled with the necessary beans into an OSGi bundle?
One bundle for all necessary beans, and just drop the blueprint.xml files in Karaf's deploy folder?
One CamelContext which imports all the other CamelContexts, bundled with all necessary beans?
This is more a question about service design than about Camel.
Since a bundle is a deployment unit, I would first of all look at the different lifecycles of your code.
If some things must always be deployed together and cannot evolve individually, you can put them in one bundle, simply because, in terms of releasing and deployment, you gain nothing from dividing the code into smaller units.
On the other hand, if something evolves faster or slower and must therefore be deployed more often (or less often), you should put it in its own bundle.
This way you are able to deploy code only when it really changes, in contrast to deploying a giant EAR file for a big monolithic application when only a small bugfix was implemented.
So, in summary, you can apply more or less the microservice principles to cut your code into units.

What can cause high number of pending messages with no slow consumer?

I use ActiveMQ with Apache Camel.
Right now I'm experiencing an issue where ActiveMQ shows a high number of pending messages. The messages are stuck in the pending state and the dequeue process is very slow.
But this doesn't seem to add up with the dispatched count of each consumer.
Is my understanding correct that, normally, to have that many pending messages the dispatched queue size of each consumer should already be near the default prefetch limit (which is 1000)? Yet it's just 20-80 for each consumer.
I don't have much knowledge about ActiveMQ, so where should I look to get an idea of how to solve this issue?
Connection Configuration
01 is the active one, and 02 is in standby mode
failover:(tcp://mq01:61616,tcp://mq02:61616)
Connection Factory
The first one is for most of the queues; the second one is dedicated to the task with a heavy load.
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
<property name="connectionFactory" ref="my-connectionFactory" />
<property name="idleTimeout" value="0"/>
<property name="maxConnections" value="5" />
</bean>
<bean id="consumerPooledConnectionFactory"
class="org.apache.activemq.pool.PooledConnectionFactory" init-method="start" destroy-method="stop">
<property name="maxConnections" value="2" />
<property name="connectionFactory" ref="my-connectionFactory" />
</bean>
Update: I have found that, in my case, the issue is that the ActiveMQ connection pool is shared between the producer and the consumer, and this creates competition for the available connections in the pool.
So the lesson learned is to always use separate connection pools for producers and consumers.
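As a rough sketch of what that separation could look like, reusing the bean names from the configuration above (how the two pools are wired into the Camel JMS components is my assumption and is not shown in the question):

<!-- pool used only by producing routes -->
<bean id="producerPooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
      init-method="start" destroy-method="stop">
    <property name="connectionFactory" ref="my-connectionFactory" />
    <property name="maxConnections" value="5" />
</bean>

<!-- pool used only by consuming routes -->
<bean id="consumerPooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
      init-method="start" destroy-method="stop">
    <property name="connectionFactory" ref="my-connectionFactory" />
    <property name="maxConnections" value="2" />
</bean>

<!-- separate Camel JMS components so producers and consumers never share a pool -->
<bean id="jmsProducer" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory" ref="producerPooledConnectionFactory" />
</bean>
<bean id="jmsConsumer" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory" ref="consumerPooledConnectionFactory" />
</bean>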

How to prevent a datasource from loading

I have two datasources in my app-ds.xml file. I want only one to load at a time, because loading both takes a lot of CPU resources. That means I will have a flag somewhere that determines which database should load. Both databases contain roughly the same data; the only difference is that one is live (and used by other applications as well) and the other is a local copy (where we can modify everything). Please note that separating the databases into different environments is not the answer we hope for, because we have both databases in every environment (most likely DEV and TEST).
Any idea on how I should do this will be very helpful.
<datasources>
    <datasource jndi-name="java:/jdbc/dataSource/database1" pool-name="database1">
        <connection-url>jdbc:sybase:Tds:host:port/schema</connection-url>
        <driver>sybase</driver>
        <pool>
            <prefill>true</prefill>
            <use-strict-min>false</use-strict-min>
            <flush-strategy>FailingConnectionOnly</flush-strategy>
            <min-pool-size>10</min-pool-size>
            <max-pool-size>10</max-pool-size>
        </pool>
        <security>
            <user-name>user</user-name>
            <password>password</password>
        </security>
    </datasource>
    <datasource jndi-name="java:/jdbc/dataSource/database2" pool-name="database2">
        <connection-url>jdbc:sybase:Tds:host:port/schema</connection-url>
        <driver>sybase</driver>
        <pool>
            <prefill>true</prefill>
            <use-strict-min>false</use-strict-min>
            <flush-strategy>FailingConnectionOnly</flush-strategy>
            <min-pool-size>10</min-pool-size>
            <max-pool-size>10</max-pool-size>
        </pool>
        <security>
            <user-name>user</user-name>
            <password>password</password>
        </security>
    </datasource>
</datasources>
In JBoss, datasources are always bound and registered with the persistence context when they are deployed.
They do not take much memory at deploy time.
You should remove the local datasource when your application goes to production.
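If removing it is inconvenient, one possible alternative (an assumption on my part, not part of the answer above, and it requires a JBoss AS 7 / WildFly-style datasources schema) is to keep the definition but switch it off with the enabled attribute:

<!-- assumption: the deployed *-ds.xml is parsed with a JBoss AS 7 / WildFly datasources schema,
     where enabled="false" keeps the definition but does not activate the pool -->
<datasource jndi-name="java:/jdbc/dataSource/database2" pool-name="database2" enabled="false">
    <connection-url>jdbc:sybase:Tds:host:port/schema</connection-url>
    <driver>sybase</driver>
</datasource>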

How to connect to several databases using MyBatis Spring Integration?

I use MyBatis with Spring integration in my application. We have several Oracle databases in our company. One query must be executed against one database, another must be executed against a different database. How can I configure MyBatis to use different database connections for different queries?
That's one of the first topics covered in the MyBatis 3 User Guide. Basically, you should have a separate XML configuration file for each database, and the simplest way would be to create the mappers by passing in the configuration:
// Resources and SqlSessionFactoryBuilder come from org.apache.ibatis.io / org.apache.ibatis.session
String resource = "org/mybatis/example/Configuration.xml";
Reader reader = Resources.getResourceAsReader(resource);
SqlSessionFactory sqlMapper = new SqlSessionFactoryBuilder().build(reader);
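With two databases that means two configuration files and two factories; a sketch, with placeholder configuration file names that are not from the original answer:

// one SqlSessionFactory per database, each built from its own configuration file (hypothetical file names)
SqlSessionFactory db1Factory = new SqlSessionFactoryBuilder()
        .build(Resources.getResourceAsReader("org/mybatis/example/Database1Config.xml"));
SqlSessionFactory db2Factory = new SqlSessionFactoryBuilder()
        .build(Resources.getResourceAsReader("org/mybatis/example/Database2Config.xml"));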
EDIT:
Sorry, I wasn't reading carefully. Anyway, I believe the code snippet is self-explanatory:
<jee:jndi-lookup id="jndiDatabase1" jndi-name="jdbc/database1"/>
<jee:jndi-lookup id="jndiDatabase2" jndi-name="jdbc/database2"/>

<bean id="database1" class="org.mybatis.spring.SqlSessionFactoryBean">
    <property name="configLocation" value="classpath:/some/path/to/database1Config.xml"/>
    <property name="dataSource" ref="jndiDatabase1"/>
</bean>

<bean id="database2" class="org.mybatis.spring.SqlSessionFactoryBean">
    <property name="configLocation" value="classpath:/some/path/to/database2Config.xml"/>
    <property name="dataSource" ref="jndiDatabase2"/>
</bean>
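Each mapper can then be pointed at the factory for its database. A sketch assuming mybatis-spring's MapperFactoryBean and hypothetical mapper interfaces com.example.Database1Mapper / com.example.Database2Mapper:

<!-- mapper bound to the first database (hypothetical interface name) -->
<bean id="database1Mapper" class="org.mybatis.spring.mapper.MapperFactoryBean">
    <property name="mapperInterface" value="com.example.Database1Mapper"/>
    <property name="sqlSessionFactory" ref="database1"/>
</bean>
<!-- mapper bound to the second database (hypothetical interface name) -->
<bean id="database2Mapper" class="org.mybatis.spring.mapper.MapperFactoryBean">
    <property name="mapperInterface" value="com.example.Database2Mapper"/>
    <property name="sqlSessionFactory" ref="database2"/>
</bean>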
If you are looking to support different types of databases, my answer addresses just that.
As of MyBatis 3, multiple database vendors are supported internally; for the detailed configuration refer to the official documentation.
The following is how to configure it with Spring:
<bean id="vendorProperties" class="org.springframework.beans.factory.config.PropertiesFactoryBean">
<property name="properties">
<props>
<prop key="SQL Server">sqlserver</prop>
<prop key="DB2">db2</prop>
<prop key="Oracle">oracle</prop>
<prop key="MySQL">mysql</prop>
</props>
</property>
</bean>
<bean id="databaseIdProvider" class="org.apache.ibatis.mapping.VendorDatabaseIdProvider">
<property name="properties" ref="vendorProperties"/>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="databaseIdProvider" ref="databaseIdProvider" />
</bean>
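With the databaseIdProvider in place, individual mapper statements can be bound to a vendor via their databaseId attribute, matching the values configured above; the select below is a made-up illustration, not part of the original answer:

<!-- used only when the connection's vendor resolves to "oracle" -->
<select id="selectNow" resultType="date" databaseId="oracle">
    SELECT SYSDATE FROM DUAL
</select>
<!-- used only when the vendor resolves to "mysql" -->
<select id="selectNow" resultType="date" databaseId="mysql">
    SELECT NOW()
</select>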

Google App Engine JDO and Strong Consistency

I have been experimenting with GAE (1.7.0) for a couple of weeks now and I am having some issues with STRONG consistency.
I have been researching the issue, but I am still unclear.
Is someone able to say definitively that, if using JDO within GAE, the consistency will be EVENTUAL,
and that the only way to achieve STRONG consistency is to not use JDO and instead use the GAE entity classes with ancestors?
At this stage I don't know if it is my code at fault or if this is just not supported within the environment. In any case I am losing my fragile little mind :-)
My jdoconfig.xml file
<?xml version="1.0" encoding="utf-8"?>
<jdoconfig xmlns="http://java.sun.com/xml/ns/jdo/jdoconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:noNamespaceSchemaLocation="http://java.sun.com/xml/ns/jdo/jdoconfig">
    <persistence-manager-factory name="transactions-optional">
        <property name="javax.jdo.PersistenceManagerFactoryClass"
                  value="org.datanucleus.store.appengine.jdo.DatastoreJDOPersistenceManagerFactory"/>
        <property name="javax.jdo.option.ConnectionURL" value="appengine"/>
        <property name="javax.jdo.option.NontransactionalRead" value="true"/>
        <property name="javax.jdo.option.NontransactionalWrite" value="true"/>
        <property name="javax.jdo.option.RetainValues" value="true"/>
        <property name="datanucleus.appengine.autoCreateDatastoreTxns" value="true"/>
        <property name="datanucleus.appengine.datastoreReadConsistency" value="STRONG" />
    </persistence-manager-factory>
</jdoconfig>
Thanks
I do not think that you can be assured of strong consistency just by setting datastoreReadConsistency to STRONG in the jdoconfig.xml file.
Google App Engine's High Replication Datastore (HRD) is now the default data repository for App Engine applications. In this model, queries are only guaranteed to be eventually consistent.
What you have mentioned is correct and also matches the documentation, which states: "To obtain strongly consistent query results, you need to use an ancestor query limiting the results to a single entity group."
See the note at https://developers.google.com/appengine/docs/java/datastore/structuring_for_strong_consistency
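For comparison, this is roughly what such an ancestor query looks like with the low-level datastore API; the "Parent" and "Greeting" kinds and the key name are made up for illustration:

// classes are from com.google.appengine.api.datastore; kinds and key name are hypothetical
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Key parentKey = KeyFactory.createKey("Parent", "someParentName");
Query query = new Query("Greeting", parentKey); // ancestor query, limited to one entity group
List<Entity> greetings = datastore.prepare(query).asList(FetchOptions.Builder.withLimit(10));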
