JUnit testing blueprint with openJPA in route - apache-camel

I have a test case that loads the blueprint.xml and persistence.xml at the start of the JUnit test, but when the test actually runs an error is thrown for not having a persistence provider.
Caused by: javax.persistence.PersistenceException: No Persistence provider for EntityManager named blacklisting-pu
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:69)
at org.springframework.orm.jpa.LocalEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalEntityManagerFactoryBean.java:92)
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:310)
at org.apache.camel.component.jpa.JpaEndpoint.createEntityManagerFactory(JpaEndpoint.java:255)
at org.apache.camel.component.jpa.JpaEndpoint.getEntityManagerFactory(JpaEndpoint.java:165)
at org.apache.camel.component.jpa.JpaEndpoint.validate(JpaEndpoint.java:248)
at org.apache.camel.component.jpa.JpaEndpoint.createProducer(JpaEndpoint.java:103)
at org.apache.camel.impl.ProducerCache.doGetProducer(ProducerCache.java:405)
here is the first part of the log showing the provider is loaded:
16 blacklisting-pu TRACE [main] openjpa.Runtime - Setting the following properties from "file:/workspace/git/jdbc-util/target/test-classes/META-INF/persistence.xml" into configuration: {openjpa.jdbc.SynchronizeMappings=buildSchema(SchemaAction='add,deleteTableContents'), openjpa.ConnectionPassword=, openjpa.ConnectionDriverName=org.h2.Driver, javax.persistence.provider=org.apache.openjpa.persistence.PersistenceProviderImpl, openjpa.MetaDataFactory=jpa(Types=com.entity.MyEntity), openjpa.Log=DefaultLevel=TRACE, Tool=INFO, PersistenceVersion=1.0, openjpa.ConnectionUserName=, openjpa.ConnectionURL=jdbc:h2:mem:blacklisting;DB_CLOSE_DELAY=1000, openjpa.Id=blacklisting-pu}
74 blacklisting-pu TRACE [main] openjpa.Runtime - No cache marshaller found for id org.apache.openjpa.conf.MetaDataCacheMaintenance.
119 blacklisting-pu TRACE [main] openjpa.Runtime - No cache marshaller found for id org.apache.openjpa.conf.MetaDataCacheMaintenance.
128 blacklisting-pu TRACE [main] openjpa.MetaData - Scanning resource "META-INF/orm.xml" for persistent types.
129 blacklisting-pu TRACE [main] openjpa.MetaData - The persistent unit root url is "null"
129 blacklisting-pu TRACE [main] openjpa.MetaData - parsePersistentTypeNames() found [com.entity.BlacklistingEntity].
129 blacklisting-pu TRACE [main] openjpa.MetaData - Found 1 classes with metadata in 9 milliseconds.
132 blacklisting-pu TRACE [main] openjpa.MetaData - Clearing metadata repository"org.apache.openjpa.meta.MetaDataRepository#1766bfd8".
I had the test case working using Spring, with the route defined in a Spring camel-context.xml, but then I needed to move the route to a blueprint.xml, and now it can't seem to find the persistence.xml at runtime for the tests.
I have been using this as my reference, along with a lot of googling:
http://camel.apache.org/blueprint-testing.html
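For context, a blueprint test built from that page looks roughly like this (just a sketch; the descriptor path and the direct:start endpoint are assumptions, not my actual code):
import org.apache.camel.test.blueprint.CamelBlueprintTestSupport;
import org.junit.Test;

public class BlacklistingRouteTest extends CamelBlueprintTestSupport {

    @Override
    protected String getBlueprintDescriptor() {
        // Blueprint file(s) loaded from the test classpath; the route inside
        // uses a jpa: endpoint pointing at the blacklisting-pu unit.
        return "OSGI-INF/blueprint/blueprint.xml";
    }

    @Test
    public void entityIsPersisted() throws Exception {
        // The failure above happens when the jpa: producer is created for the
        // first exchange, i.e. when createEntityManagerFactory("blacklisting-pu") runs.
        template.sendBody("direct:start", "some payload");
    }
}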
Any help would be much appreciated
EDIT:
I have run the following as part of the setup method in JUnit, and the class is found without issue:
Object obj = Class.forName("org.apache.openjpa.persistence.PersistenceProviderImpl").newInstance();
if (null == obj) {
    Assert.fail("org.apache.openjpa.persistence.PersistenceProviderImpl not present on classpath.");
} else {
    LOG.info("the class {} exists on the classpath.", obj.getClass().getName());
}
here is the log showing it loaded
Creating service instance
Service created: org.apache.aries.blueprint.ext.impl.ExtNamespaceHandler#10a33ce2
Creating listeners
the class org.apache.openjpa.persistence.PersistenceProviderImpl exists on the classpath.
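Class.forName only proves the class is visible to the test classloader. Persistence.createEntityManagerFactory goes through the JPA provider resolver instead, so a check along these lines (just a diagnostic sketch) shows what that mechanism actually finds:
import java.util.List;
import javax.persistence.spi.PersistenceProvider;
import javax.persistence.spi.PersistenceProviderResolverHolder;

// List the providers the javax.persistence bootstrap will consult. If this list is
// empty, the META-INF/services/javax.persistence.PersistenceProvider entry from the
// OpenJPA jar is not visible to the resolver, even though the class itself loads.
List<PersistenceProvider> providers = PersistenceProviderResolverHolder
        .getPersistenceProviderResolver().getPersistenceProviders();
for (PersistenceProvider provider : providers) {
    LOG.info("JPA provider visible to the resolver: {}", provider.getClass().getName());
}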
Persistence.xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0">
<persistence-unit name="blacklisting-pu" transaction-type="RESOURCE_LOCAL">
<provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
<class>com.entity.BlacklistingEntity</class>
<properties>
<property name="openjpa.ConnectionURL" value="jdbc:h2:mem:blacklisting;DB_CLOSE_DELAY=1000" />
<property name="openjpa.ConnectionDriverName" value="org.h2.Driver" />
<property name="openjpa.ConnectionUserName" value="" />
<property name="openjpa.ConnectionPassword" value="" />
<property name="openjpa.Log" value="DefaultLevel=TRACE, Tool=INFO" />
<property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(SchemaAction='add,deleteTableContents')" />
</properties>
</persistence-unit>
</persistence>
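As another data point, the unit can be bootstrapped straight through the OpenJPA provider, bypassing the javax.persistence.Persistence lookup entirely (a diagnostic sketch, not a fix):
import javax.persistence.EntityManagerFactory;
import org.apache.openjpa.persistence.PersistenceProviderImpl;

// Ask the OpenJPA provider directly for the unit defined in META-INF/persistence.xml.
// If this works while Persistence.createEntityManagerFactory("blacklisting-pu") fails,
// the problem is in provider discovery, not in the persistence.xml itself.
EntityManagerFactory emf = new PersistenceProviderImpl()
        .createEntityManagerFactory("blacklisting-pu", null);
if (emf != null) {
    emf.close();
}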

Related

Postgres : HikariPool-1 (This connection has been closed.). Possibly consider using a shorter maxLifetime value

I have a Spring Boot setup designed to use a Hikari pool and an EntityManager for querying a PostgreSQL database.
Here are some things used for configuration:
persistence.xml:
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
version="2.1">
<persistence-unit name="EmF" transaction-type="RESOURCE_LOCAL">
<properties>
<property name="javax.persistence.jdbc.url" value="jdbc:postgresql://url"/>
<property name="javax.persistence.jdbc.user" value="postgres"/>
<property name="javax.persistence.jdbc.password" value="some_password"/>
<property name="javax.persistence.jdbc.driver" value="org.postgresql.Driver"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/> <!-- DB Dialect -->
<property name="hibernate.hbm2ddl.auto" value="update" /> <!-- create / create-drop / update -->
<property name="hibernate.show_sql" value="false" /> <!-- Show SQL in console -->
<property name="hibernate.format_sql" value="true" /> <!-- Show SQL formatted -->
<property name="hibernate.metadata_builder_contributor"
value="com.thermofisher.hercules.data.repository.SqlFunctionsMetadataBuilderContributor"/>
</properties>
</persistence-unit>
</persistence>
Repo class:
@Repository
public interface InputFileInfoRepository extends JpaRepository<InputFileInfo, Integer> {

    @PersistenceContext
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("EmF");
    @PersistenceContext
    EntityManager em = emf.createEntityManager();

    default InputFileInfo saveAndPersist(InputFileInfo inputFileInfo) {
        EntityTransaction transaction = em.getTransaction();
        if (!transaction.isActive()) {
            em.getTransaction().begin();
        }
        em.persist(inputFileInfo);
        transaction.commit();
        return inputFileInfo;
    }
}
When I run the application it works fine and DB operations are performed.
But if the connection to the DB is lost in between and re-established after some time,
performing DB operations then produces the following error:
2022-06-21 16:23:13.757 WARN 8808 --- [nio-8080-exec-8] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#841b625 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
2022-06-21 16:23:13.760 WARN 8808 --- [nio-8080-exec-8] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#150fb29f (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
2022-06-21 16:23:13.762 WARN 8808 --- [nio-8080-exec-8] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#107ace85 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
2022-06-21 16:23:13.765 WARN 8808 --- [nio-8080-exec-8] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#7f132fec (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
2022-06-21 16:23:13.766 WARN 8808 --- [nio-8080-exec-8] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#3286a04f (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
2022-06-21 16:23:16.047 WARN 8808 --- [nio-8080-exec-8] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 08006
2022-06-21 16:23:16.048 ERROR 8808 --- [nio-8080-exec-8] o.h.engine.jdbc.spi.SqlExceptionHelper : An I/O error occurred while sending to the backend.
2022-06-21 16:23:16.066 INFO 8808 --- [nio-8080-exec-8] o.h.e.internal.DefaultLoadEventListener : HHH000327: Error performing load command
org.hibernate.exception.JDBCConnectionException: could not extract ResultSet
at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:112) ~[hibernate-core-5.6.1.Final.jar:5.6.1.Final]
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:37) ~[hibernate-core-5.6.1.Final.jar:5.6.1.Final]
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113) ~[hibernate-core-5.6.1.Final.jar:5.6.1.Final]
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:99) ~[hibernate-core-5.6.1.Final.jar:5.6.
I tried setting the maxLifetime value through configuration:
spring.datasource.max-lifetime=600000
but I can't find its timeout equivalent in the Postgres DB settings.
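(For reference, and not something from the original post: with Spring Boot the Hikari-specific property is normally spring.datasource.hikari.max-lifetime rather than spring.datasource.max-lifetime. The same knob can also be set on the pool itself; a minimal sketch with arbitrary values:)
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Sketch: configure the pool directly. maxLifetime should be shorter than any idle
// timeout enforced by the database or by a firewall sitting between app and Postgres.
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://url");   // placeholder URL from the question
config.setUsername("postgres");
config.setPassword("some_password");
config.setMaxLifetime(600_000);               // 10 minutes, in milliseconds
HikariDataSource dataSource = new HikariDataSource(config);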
Any help on this is appreciated

Failed to create session factory AMQ219007 when sending messages to AMQ Artemis from Camel JMS component

I am trying to send messages from a Camel route via the JMS component, and on some messages the error below occurs. Out of 50,000 messages, 4,937 got through, at an average speed of 497 msg/sec. At the same time, with the same settings, classic ActiveMQ gives about 10,000 msg/sec.
Artemis version 2.11.0. Camel version 2.20.2
Error
Error while routing: Message has put to DEAD.LETTER.QUEUE
org.springframework.jms.UncategorizedJmsException: Uncategorized exception occurred during JMS processing; nested exception is javax.jms.JMSException: Failed to create session factory; nested exception is ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:316)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:169)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:487)
at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.send(JmsConfiguration.java:516)
at org.apache.camel.component.jms.JmsProducer.doSend(JmsProducer.java:440)
at org.apache.camel.component.jms.JmsProducer.processInOnly(JmsProducer.java:394)
at org.apache.camel.component.jms.JmsProducer.process(JmsProducer.java:157)
at org.apache.camel.processor.SendDynamicProcessor$1.doInAsyncProducer(SendDynamicProcessor.java:132)
at org.apache.camel.impl.ProducerCache.doInAsyncProducer(ProducerCache.java:445)
at org.apache.camel.processor.SendDynamicProcessor.process(SendDynamicProcessor.java:127)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:97)
at org.apache.camel.processor.WireTapProcessor$1.call(WireTapProcessor.java:158)
at org.apache.camel.processor.WireTapProcessor$1.call(WireTapProcessor.java:153)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Suppressed: org.springframework.jms.UncategorizedJmsException: Uncategorized exception occurred during JMS processing; nested exception is javax.jms.JMSException: Failed to create session factory; nested exception is ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]
... 19 more
Suppressed: org.springframework.jms.UncategorizedJmsException: Uncategorized exception occurred during JMS processing; nested exception is javax.jms.JMSException: Failed to create session factory; nested exception is ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]
... 19 more
Suppressed: org.springframework.jms.UncategorizedJmsException: Uncategorized exception occurred during JMS processing; nested exception is javax.jms.JMSException: Failed to create session factory; nested exception is ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]
... 19 more
Caused by: javax.jms.JMSException: Failed to create session factory
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:886)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:299)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:294)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:180)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:474)
... 16 more
Caused by: ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:799)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:884)
... 20 more
Caused by: javax.jms.JMSException: Failed to create session factory
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:886)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:299)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:294)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:180)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:474)
... 16 more
Caused by: ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:799)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:884)
... 20 more
Caused by: javax.jms.JMSException: Failed to create session factory
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:886)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:299)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:294)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:180)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:474)
... 16 more
Caused by: ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:799)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:884)
... 20 more
Caused by: javax.jms.JMSException: Failed to create session factory
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:886)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:299)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:294)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:180)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:474)
... 16 more
Caused by: ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:799)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:884)
... 20 more
Camel route
<?xml version="1.0" encoding="UTF-8"?>
<beans factor:name="Send to Artemis" factor:status="true"
xmlns="http://www.springframework.org/schema/beans"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:cxf="http://camel.apache.org/schema/cxf"
xmlns:factor="factor-schema"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/camel-cxf.xsd">
<camelContext errorHandlerRef="myDeadLetterErrorHandler"
id="e726891b-7413-4428-9bf5-f6c85116c771" xmlns="http://camel.apache.org/schema/spring">
<interceptFrom>
<bean method="updateMDC" ref="logInterceptorService"/>
</interceptFrom>
<route factor:name="Send to Artemis" id="route-4cd627f9-ba6d-43e3-ba24-4f61d8c1b69b">
<from id="35ecd8c3-ea1e-48ee-8d1e-85815576242c" uri="timer://init?delay=-1&repeatCount=50000">
<description>Timer</description>
</from>
<setBody factor:component="SetBodyEndpoint"
factor:custom-name="Set message body"
factor:guid="endpoint-34546317-7707-4c92-9d08-c388ea6cc390" id="endpoint-34546317-7707-4c92-9d08-c388ea6cc390">
<simple><![CDATA[<?xml version="1.0" encoding="utf-8"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
<env:Header>
<props:MessageInfo xmlns:props="urn:cbr-ru:msg:props:v1.3">
<props:To>uic:5454</props:To>
<props:From>uic:newuser</props:From>
<props:AppMessageID>guid:1134f9d42bc98c84caea7ee62c17881312</props:AppMessageID>
<props:MessageID>guid:1429e234ae7016f981111361</props:MessageID>
<props:MessageType>1</props:MessageType>
<props:Priority>5</props:Priority>
<props:CreateTime>2016-07-27T12:41:13Z</props:CreateTime>
<props:LegacyTransportFileName>20191008 # pacs.008.001.08 # AAAACNBJXXX # BBBBRUMMYYY # 123456789.xml</props:LegacyTransportFileName>
<props:SendTime>2016-07-27T12:41:14Z</props:SendTime>
<props:AckRequest>false</props:AckRequest>
</props:MessageInfo>
<props:DocInfo xmlns:props="urn:cbr-ru:msg:props:v1.3">
<props:DocFormat>1</props:DocFormat>
<props:DocType>ED311</props:DocType>
<props:EDRefID EDNo="1" EDDate="2016-07-27" EDAuthor="1203709000" />
</props:DocInfo>
</env:Header>
<env:Body>
<sen:SigEnvelope xmlns:sen="urn:cbr-ru:dsig:env:v1.1">kk
</sen:SigEnvelope>
</env:Body>
</env:Envelope>]]></simple>
</setBody>
<setHeader factor:component="SetHeaderEndpoint"
factor:custom-name="Set headers"
factor:guid="87ba2d3e-7eff-42cd-9efc-048764539364"
headerName="JMSDeliveryMode" id="87ba2d3e-7eff-42cd-9efc-048764539364">
<constant>NON_PERSISTENT</constant>
</setHeader>
<wireTap id="18382f84-1f34-487e-a55a-a731e1ec9560" uri="jms://TEST.FROM.CAMEL?connectionFactory=#RemoteArtemisMQ">
<description>JMS</description>
</wireTap>
</route>
</camelContext>
<bean
class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory"
factor:bean-type="DEFAULT" id="RemoteArtemisMQ" name="RemoteArtemisMQ">
<constructor-arg value="tcp://192.168.58.6:61619"/>
</bean>
</beans>
Artemis config
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>0.0.0.0</name>
<persistence-enabled>true</persistence-enabled>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<!--
This value was determined through a calculation.
Your system could perform 83.33 writes per millisecond
on the current journal configuration.
That translates as a sync write every 12000 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>12000</journal-buffer-timeout>
<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>4096</journal-max-io>
<!--
You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
<network-check-NIC>theNicName</network-check-NIC>
-->
<!--
Use this to use an HTTP server to validate the network
<network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs
it should accept IPV6
Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
Using IPs that could eventually disappear or be partially visible may defeat the purpose.
You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>84000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<acceptors>
<!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
<!-- amqpCredits: The number of credits sent to AMQP producers -->
<!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
<!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
as duplicate detection requires applicationProperties to be parsed on the server. -->
<!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://0.0.0.0:61619?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
</addresses>
<!-- Uncomment the following if you want to use the Standard LoggingActiveMQServerPlugin pluging to log in events
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
-->
</core>
</configuration>
I set -Xmx6G for Artemis, but it didn't affect anything; only about 5-7% of the allocated memory is consumed.
Can you tell me why this error occurs and how to improve performance?
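(A general note, not from the original thread: with a bare ActiveMQConnectionFactory, Spring's JmsTemplate opens a new connection and session for every send, which caps throughput and multiplies the connection attempts that can fail with AMQ219007 under load. A sketch of wrapping the factory in Spring's CachingConnectionFactory:)
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

// Sketch: reuse JMS connections/sessions instead of creating them per message.
// The broker URL matches the one used for the RemoteArtemisMQ bean above.
ActiveMQConnectionFactory artemis = new ActiveMQConnectionFactory("tcp://192.168.58.6:61619");
CachingConnectionFactory caching = new CachingConnectionFactory(artemis);
caching.setSessionCacheSize(10);   // arbitrary cache size
// Register "caching" as the connection factory referenced by the Camel JMS endpoint
// (connectionFactory=#RemoteArtemisMQ) instead of the raw Artemis factory.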
It turned out I had a different version of the Artemis JMS client in the project; after I explicitly pinned which version to use, the error went away.

Spring Data @Transactional not rolling back with SQL Server after a RuntimeException

I've enabled my Spring application to use transactions and annotated my service method accordingly, but the changes to my DB persist when a RuntimeException is thrown.
My Spring configuration looks like this:
<!-- Data Source. -->
<jee:jndi-lookup id="dataSource" jndi-name="java:/jdbc/BeheermoduleDS"/>
<!-- JPA Entity Manager. -->
<jee:jndi-lookup id="entityManagerFactory" jndi-name="java:/jpa/BeheermoduleDS"/>
<bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<tx:annotation-driven transaction-manager="txManager" />
My datasource configuration in my jboss' configuration file looks like this:
<datasource jta="true" jndi-name="java:/jdbc/BeheermoduleDS" pool-name="BeheermoduleDS" enabled="true" use-java-context="true" use-ccm="true">
<connection-url>jdbc:sqlserver://localhost:1433;databaseName=Gebruikers;</connection-url>
<driver>sqljdbc</driver>
<security>
<user-name>jboss</user-name>
<password>*****</password>
</security>
</datasource>
My Service method looks like this:
@Transactional
public void authorise(Gebruiker user) {
    user.setStatus(GebruikerStatus.Actief.name());
    gebruikerRepo.save(user);
    if (true) {
        throw new RuntimeException("Exception happened just like that");
    }
    // does more stuff here that is never reached
}
My repository extends a spring data repository and looks like this:
public interface GebruikerRepository extends PagingAndSortingRepository<Gebruiker, Long>, QueryDslPredicateExecutor<Gebruiker> {
}
The exception is thrown and caught by a controller, which just shows a message to the user that an exception occurred. When I check my SQL Server DB, the change made to the user status has been committed.
Weren't they supposed to have been rolled back with the RuntimeException?
After turning debug on for org.springframework.transaction.interceptor I saw that no transactions are being started for my service method, but they are for a bunch of JpaRepository methods.
Also, this is what my persistence.xml looks like:
<persistence-unit name="BeheermodulePU" transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<non-jta-data-source>java:/jdbc/BeheermoduleDS</non-jta-data-source>
Judging from the symptoms you describe you are scanning for the same classes twice. You probably have the same <context:component-scan /> in both the configuration of the ContextLoaderListener and DispatcherServlet.
You want the ContextLoaderListener to scan for everything but @Controller, and the DispatcherServlet only for @Controllers, leading to something like this.
For the ContextLoaderListener
<!-- Load everything except @Controllers -->
<context:component-scan base-package="com.myapp">
<context:exclude-filter expression="org.springframework.stereotype.Controller" type="annotation"/>
</context:component-scan>
For the DispatcherServlet
<!-- Load only @Controllers -->
<context:component-scan base-package="com.myapp" use-default-filters="false">
<context:include-filter expression="org.springframework.stereotype.Controller" type="annotation"/>
</context:component-scan>
See also @Service are constructed twice for another sample and a broader explanation.

How to deal with datastore-indexes in App Engine?

For this example (LINK) I tried to build it, but this exception occurred:
com.google.appengine.api.datastore.DatastoreNeedIndexException: no matching index found.
The suggested index for this query is:
<datastore-index kind="Contact" ancestor="true" source="manual">
<property name="UserContacts_INTEGER_IDX" direction="asc"/>
</datastore-index>
How can I write these indexes manually? I tried creating WEB-INF/datastore-indexes.xml,
and in this XML I wrote the following:
<?xml version="1.0" encoding="utf-8"?>
<datastore-indexes autoGenerate="true">
<datastore-index kind="Contact" ancestor="true" source="manual">
<property name="UserContacts_INTEGER_IDX" direction="asc"/>
</datastore-index>
But when I go to deploy, this error stops me from continuing:
An internal error occurred during: "Deploying store to Google".
XML error validating
So how can I get these indexes?
Another issue:
when I run this code and add some properties to the User class, such as
@Persistent(mappedBy = "userC")
public List<Contact> UserContacts = new ArrayList<Contact>();
and deploy it, the engine makes indexes for UserContacts, but an exception appears due to the new property (the same error as above: it can't make indexes for them).
The best thing for you to do is run your app locally and perform some test queries on those entities.
As you have <datastore-indexes autoGenerate="true"> in your indexes config you should get a file called
WEB-INF/appengine-generated/datastore-indexes-auto.xml
In there you should find all the index definitions needed for your app to work.
Copy those index definitions to your WEB-INF/datastore-indexes.xml and update your app.
If you go to your Cloud Console and check the Storage/Datastore/Indexes view, you should see all those indexes either building or serving. Once all those indexes are "serving" you should be good to go.

JPA does not write to table

I have the following JPA code, with all the values checked (ticket contains a valid bean, etc.). It executes and does not throw any exceptions, yet in the end no data is written to the table.
I also tried retrieving a bean from the table; it also "works" (the table is empty, so no data is returned).
The setup is:
JBoss 6.1 Final
SQL Server 2008 Express (Microsoft SQL JDBC 3 driver)
The persistence code:
public String saveTicket() {
    System.out.println("Controller saveTicket() ");
    // I know it would be better to share a single instance of the factory; this is just for testing.
    EntityManagerFactory factory = Persistence.createEntityManagerFactory("GesMan");
    EntityManager entityMan = factory.createEntityManager();
    entityMan.persist(this.ticket);
    entityMan.close();
    return null; // return value omitted in the original snippet
}
The persistence unit is
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="GesMan" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>java:/GesManDS</jta-data-source>
<class>es.caib.gesma.gesman.Ticket</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.SQLServerDialect"/>
<property name="hibernate.transaction.manager_lookup_class"
value="org.hibernate.transaction.JBossTransactionManagerLookup"/>
<property name="hibernate.show_sql" value="true"/>
</properties>
</persistence-unit>
</persistence>
The datasource
<datasources>
<local-tx-datasource>
<jndi-name>GesManDS</jndi-name>
<connection-url>jdbc:sqlserver://spsigeswnt14.caib.es:1433;DatabaseName=TEST_GESMAN</connection-url>
<driver-class>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver-class>
<user-name>thisis</user-name>
<password>notthepassword</password>
<check-valid-connection-sql>SELECT * FROM dbo.Ticket</check-valid-connection-sql>
<metadata>
<type-mapping>MS SQLSERVER</type-mapping>
</metadata>
</local-tx-datasource>
</datasources>
Call entityMan.flush() or transaction.commit() before closing the EntityManager; otherwise all queued changes are discarded on close.
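A minimal sketch of what that looks like for a RESOURCE_LOCAL unit (hedged, since the unit posted above is JTA, where the transaction has to come from the container instead, as described just below):
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;

// RESOURCE_LOCAL variant: persist() only queues the INSERT; committing the
// transaction is what pushes it to the database before the manager is closed.
EntityManagerFactory factory = Persistence.createEntityManagerFactory("GesMan");
EntityManager entityMan = factory.createEntityManager();
EntityTransaction tx = entityMan.getTransaction();
tx.begin();
entityMan.persist(ticket);   // "ticket" is the entity from the question
tx.commit();
entityMan.close();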
In the end it looks like I was using the wrong approach. In JBoss you can't (or at least I could not manage to) access JPA directly as you would in Java SE.
I ended up creating an EJB (with transactions) and moving all the JPA logic there.
PS: Of course, if I am wrong please tell me (by now it is more of an academic issue, but I still want to know).
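A sketch of that EJB approach, assuming a standard stateless bean with a container-managed entity manager (the class name is made up):
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Container-managed JPA: the EJB method runs in a JTA transaction by default,
// so the persist is flushed and committed when saveTicket() returns.
@Stateless
public class TicketService {

    @PersistenceContext(unitName = "GesMan")
    private EntityManager entityManager;

    public void saveTicket(Ticket ticket) {
        entityManager.persist(ticket);
    }
}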
