I'm trying to make unit tests work in H2 for the DB2 function TIMESTAMP(Date, int), which sets the precision of the timestamp.
A similar question led me to
http://www.h2database.com/html/features.html#user_defined_functions
but I'm not sure how the overloading would work in this scenario.
Can anyone give me an example?
(Also, please don't answer just to say that having different databases for testing and production is bad practice. It's not something I'll be able to change :/ )
Hi, I had a native query that used the VALUE and INT functions, and I got it working with the following steps:
1) Add a @Before method that registers the custom functions:
@Before
public void addCustomFunctions() throws Exception {
    Context context = new InitialContext();
    DataSource dataSource = (DataSource) context.lookup("java:jboss/datasources/ejb3-junit-arguillian-exampleTestDS");
    Connection conn = dataSource.getConnection("sa", "sa");
    Statement st = conn.createStatement();
    // register the custom SQL function as an H2 alias backed by a static Java method
    st.execute("CREATE ALIAS INT FOR \"com.ec.custom.IntFunction.transformInt\"");
    st.close();
    conn.close();
}
2) Add a class with a public static method:
public class IntFunction {

    public static Integer transformInt(Object object) {
        return Integer.parseInt(object.toString());
    }
}
3) The JNDI name for the datasource must exist:
java:jboss/datasources/ejb3-junit-arguillian-exampleTestDS
My persistence.xml:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="primary">
<!-- We use a different datasource for tests, so as to not overwrite
production data. This is an unmanaged data source, backed by H2, an in memory
database. Production applications should use a managed datasource. -->
<!-- The datasource is deployed as WEB-INF/test-ds.xml,
you can find it in the source at src/test/resources/test-ds.xml -->
<jta-data-source>java:jboss/datasources/ejb3-junit-arguillian-exampleTestDS</jta-data-source>
<properties>
<!-- Properties for Hibernate -->
<property name="hibernate.hbm2ddl.auto" value="create-drop" />
<property name="hibernate.show_sql" value="false" />
</properties>
</persistence-unit>
</persistence>
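For reference, the test-ds.xml mentioned in the comments above (the file that actually provides that JNDI name) could look roughly like this. This is a sketch in the usual JBoss AS7 *-ds.xml format; the in-memory H2 URL and the sa/sa credentials are assumptions that should match what the test code expects:
<?xml version="1.0" encoding="UTF-8"?>
<datasources xmlns="http://www.jboss.org/ironjacamar/schema">
    <!-- in-memory H2 database for tests; DB_CLOSE_DELAY=-1 keeps it alive across connections -->
    <datasource jndi-name="java:jboss/datasources/ejb3-junit-arguillian-exampleTestDS"
                pool-name="ejb3-junit-arguillian-example-test" enabled="true" use-java-context="true">
        <connection-url>jdbc:h2:mem:ejb3-junit-arguillian-example-test;DB_CLOSE_DELAY=-1</connection-url>
        <driver>h2</driver>
        <security>
            <user-name>sa</user-name>
            <password>sa</password>
        </security>
    </datasource>
</datasources>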
You can follow the Arquillian guide; it's what I used:
https://github.com/monk8800/ejb3-junit-arguillian-example
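Applying the same pattern to the original TIMESTAMP(Date, int) question: H2 picks among overloaded methods bound to one alias by parameter count, so a sketch could look like this (the class name, method names, and the interpretation of the precision argument as "number of fractional-second digits to keep" are my assumptions, not part of the original post):
import java.sql.Timestamp;
import java.util.Date;

public class TimestampFunctions {

    // one-argument form: TIMESTAMP(value)
    public static Timestamp toTimestamp(Date date) {
        return new Timestamp(date.getTime());
    }

    // two-argument, DB2-style form: TIMESTAMP(value, precision);
    // precision is treated as the number of fractional-second digits to keep (assumption)
    public static Timestamp toTimestamp(Date date, int precision) {
        Timestamp ts = new Timestamp(date.getTime());
        int digits = Math.max(0, Math.min(precision, 9));
        int factor = (int) Math.pow(10, 9 - digits);
        ts.setNanos((ts.getNanos() / factor) * factor);
        return ts;
    }
}
It would be registered the same way as the INT alias above:
st.execute("CREATE ALIAS TIMESTAMP FOR \"com.ec.custom.TimestampFunctions.toTimestamp\"");
One caveat: depending on the H2 version, TIMESTAMP may collide with the built-in keyword/data type, in which case the alias (and the SQL the tests run) may need a different name.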
Could anyone let me know whether one publisher can be used to pass different parameter types, such as integer, float, string, char, etc.?
Does the OpenSplice DDS Community Edition have any limitations on publishers? If so, how many publishers can it accommodate?
<OpenSplice>
<Domain>
<Name>ospl_sp_ddsi</Name>
<Id>0</Id>
<SingleProcess>true</SingleProcess>
<Service name="ddsi2">
<Command>ddsi2</Command>
</Service>
<Service name="durability">
<Command>durability</Command>
</Service>
<Service enabled="false" name="cmsoap">
<Command>cmsoap</Command>
</Service>
</Domain>
<DDSI2Service name="ddsi2">
<General>
<NetworkInterfaceAddress>192.168.147.179</NetworkInterfaceAddress>
<AllowMulticast>true</AllowMulticast>
<EnableMulticastLoopback>true</EnableMulticastLoopback>
<CoexistWithNativeNetworking>false</CoexistWithNativeNetworking>
</General>
<Compatibility>
<!-- see the release notes and/or the OpenSplice configurator on DDSI interoperability -->
<StandardsConformance>lax</StandardsConformance>
<!-- the following one is necessary only for TwinOaks CoreDX DDS compatibility -->
<!-- <ExplicitlyPublishQosSetToDefault>true</ExplicitlyPublishQosSetToDefault> -->
</Compatibility>
</DDSI2Service>
<DurabilityService name="durability">
<Network>
<Alignment>
<TimeAlignment>false</TimeAlignment>
<RequestCombinePeriod>
<Initial>2.5</Initial>
<Operational>0.1</Operational>
</RequestCombinePeriod>
</Alignment>
<WaitForAttachment maxWaitCount="10">
<ServiceName>ddsi2</ServiceName>
</WaitForAttachment>
</Network>
<NameSpaces>
<NameSpace name="defaultNamespace">
<Partition>*</Partition>
</NameSpace>
<Policy alignee="Initial" aligner="true" durability="Durable" nameSpace="defaultNamespace"/>
</NameSpaces>
</DurabilityService>
<TunerService name="cmsoap">
<Server>
<PortNr>none</PortNr>
</Server>
</TunerService>
</OpenSplice>
A DDS publisher can have multiple writers for different topics, where each topic type can include various parameters of various types (including bounded and unbounded types such as arrays and sequences).
The OpenSplice CE (Community Edition) doesn't have any limitation on the number of publishers, but when you want to run more than 10 applications on a single machine you have to change the DDSI/Discovery/ParticipantIndex parameter from its default 'auto' value to 'none'; see also this post: http://forums.opensp...index#entry4024
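For illustration, in a configuration like the one above that change would presumably go into a Discovery section of the existing DDSI2Service element (element names taken from the standard DDSI2 configuration schema; double-check them against your version's configurator):
<DDSI2Service name="ddsi2">
    ...
    <Discovery>
        <!-- default is 'auto'; 'none' lifts the roughly-10-applications-per-node participant index limit -->
        <ParticipantIndex>none</ParticipantIndex>
    </Discovery>
</DDSI2Service>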
Cheers
I used Android Studio to create both an Android project and its backend App Engine Endpoints counterpart. I have a datastore for which I am using Objectify. The system worked great until I added a filter to my Query (to show only entries for a given email).
Query<Report> query = ofy().load().type(Report.class).filter("email", user.getEmail()).order("email").order("-when").limit(limit);
This is the POJO Datastore Entity:
@Entity
public class Report {
    @Id
    Long id;
    String who;
    @Index
    Date when;
    String what;
    @Index
    String email;
}
However, I receive the following error from the Google API Explorer when I attempt to test it:
com.google.appengine.api.datastore.DatastoreNeedIndexException: no matching index found.
The suggested index for this query is:
<datastore-index kind=\"AccessReport\" ancestor=\"false\" source=\"manual\">
<property name=\"email\" direction=\"asc\"/>
<property name=\"when\" direction=\"desc\"/>
</datastore-index>
As I understand it, I simply need to create a composite index including the specific fields email and when, with their specific sort direction.
However, most documentation that I find tells me to edit datastore-indexes.xml:
App Engine predefines a simple index on each property of an entity. An
App Engine application can define further custom indexes in an index
configuration file named datastore-indexes.xml, which is generated in
your application's /war/WEB-INF/appengine-generated directory.
Unfortunately, this file does not seem to exist anywhere in my project.
Is anyone familiar with the way to change this when working with Android Studio?
Create a datastore-indexes.xml file and put it in the /WEB-INF/ folder. The content will look like this:
<datastore-indexes autoGenerate="true">
    <datastore-index kind="AccessReport" ancestor="false" source="manual">
        <property name="email" direction="asc"/>
        <property name="when" direction="desc"/>
    </datastore-index>
</datastore-indexes>
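One thing worth double-checking: the error suggests kind="AccessReport", while the entity class shown above is Report. The kind in datastore-indexes.xml has to match the actual datastore kind (with Objectify that defaults to the entity class's simple name), so for the Report entity above the entry would presumably be:
<datastore-index kind="Report" ancestor="false">
    <property name="email" direction="asc"/>
    <property name="when" direction="desc"/>
</datastore-index>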
I have the following scenario:
I have an OSGi bundle that has a service reference defined in its blueprint XML, referencing an interface in a remote bundle, and a bean that uses one of the impl's methods to populate a Properties object.
Relevant snippet from Bundle #1's XML (the consumer):
...
<!-- other bean definitions, namespace stuff, etc -->
<!-- reference to the fetching service -->
<reference id="fetchingService" interface="company.path.to.fetching.bundle.FetchingService" />
<!-- bean to hold the actual Properties object: the getProperties method is one of the overridden interface methods -->
<bean id="fetchedProperties" class="java.util.Properties" factory-ref="fetchingService" factory-method="getProperties" />
<camelContext id="contextThatNeedsProperties" xmlns="http://camel.apache.org/schema/blueprint">
<propertyPlaceholder id="properties" location="ref:fetchedProperties" />
...
<!-- the rest of the context stuff - routes and so on -->
</camelContext>
Remote Bundle's blueprint.xml:
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:camel="http://camel.apache.org/schema/blueprint"
xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0"
xsi:schemaLocation="
http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd">
<cm:property-placeholder id="config-properties" persistent-id="company.path.configfetcher" />
<bean id="fetchingService" class="company.path.to.fetching.bundle.impl.FetchingServiceImpl" scope="singleton" init-method="createLoader" depends-on="config-properties">
<property name="environment" value="${environment}" />
<property name="pathToRestService" value="${restPath}" />
</bean>
<service ref="fetchingService" interface="company.path.to.fetching.bundle.FetchingService" />
<!-- END TESTING -->
</blueprint>
From Impl Class:
public synchronized Properties getProperties() {
if(!IS_RUNNING) {
// timer task that regularly calls the REST api to check for updates
timer.schedule(updateTimerTask, 0, pollInterval);
IS_RUNNING = true;
}
//Map<String, Properties> to return matching object if it's there
if(PROPERTIES_BY_KEY.containsKey(environment)) {
return PROPERTIES_BY_KEY.get(environment);
}
/* if nothing, return an empty Properties object - if this is the case, then whatever bundle is relying on these
* properties is going to fail and we'll see it in the logs
*/
return new Properties();
}
The issue:
I have a test class (extending CamelBlueprintTestSupport), and there are a lot of moving parts, so I can't really change the order in which things happen. Unfortunately, the properties bean method gets called before the CamelContext is started. That is not a big deal, because in the test environment there is no config file to read the necessary properties from, so the retrieval fails and we get back an empty Properties object [note: we're overriding the properties component with fakes, since it's not that class being tested]. In a perfect world, though, I'd like to be able to do two things:
1) replace the service with a new Impl()
2) intercept calls to the getProperties method OR tie the bean to the new service so that the calls return the properties from the fake impl
Thoughts?
Edit #1:
Here's one of the things I'm doing as a workaround right now:
try {
    ServiceReference<FetchingService> sr = this.getBundleContext().getServiceReference(FetchingService.class);
    if (sr != null) {
        // look the service up once and configure the impl directly
        FetchingServiceImpl impl = (FetchingServiceImpl) this.getBundleContext().getService(sr);
        impl.setEnvironment(env);
        impl.setPath(path);
    }
} catch (Exception e) {
    log.error("Error getting Fetching service: {}", e.getMessage());
}
The biggest problem here is that I have to wait until createCamelContext is called for a BundleContext to exist; by that point, the getProperties call has already happened once. As I said, since in the testing environment no config for the FetchingService class exists to provide the environment and path strings, that first call fails (resulting in an empty Properties object). The second time around, the code above has set the properties on the impl class and we're off to the races. This is not a question about something that isn't working; rather, it's about finding a better, more elegant solution that can be applied in other scenarios.
Oh, and for clarification before anyone asks: the point of this service is so that we don't have to have a .cfg file for every OSGi bundle deployed to our ServiceMix instance; this central service goes and fetches the configs that the other bundles need, and the only .cfg file that needs to exist is the Fetcher's.
Other pertinent details:
Camel 2.13.2 (I wish it were 2.14, because that version adds more property-placeholder tools that would probably make this easier)
ServiceMix 5.3.1
Have you tried overriding CamelBlueprintTestSupport's addServicesOnStartup in your test (see "Adding services on startup" http://camel.apache.org/blueprint-testing.html)?
In your case something like:
@Override
protected void addServicesOnStartup(Map<String, KeyValueHolder<Object, Dictionary>> services) {
    services.put(FetchingService.class.getName(), asService(new FetchServiceImpl(), null));
}
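If the goal is to have the getProperties calls return canned values from a fake impl (your points 1 and 2), the object registered there can simply be a test-only implementation of the interface. A minimal sketch, assuming getProperties() is the only method the test needs (the class name and property values are made up):
import java.util.Properties;

import company.path.to.fetching.bundle.FetchingService;

public class FakeFetchingService implements FetchingService {

    // canned values handed to the fetchedProperties bean / property placeholder
    @Override
    public Properties getProperties() {
        Properties props = new Properties();
        props.setProperty("environment", "test");
        props.setProperty("restPath", "http://localhost/fake-rest");
        return props;
    }
}
It would then be registered instead of the real impl:
services.put(FetchingService.class.getName(), asService(new FakeFetchingService(), null));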
I have an Arquillian unit test that writes a Note, and the test passes.
Now I would like to actually view what is being persisted into my SQLServer database.
When I open up SQLServer, I see my "Note" table, with all of the requisite columns...but there's no data.
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="noteUnit"
transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>java:/jdbc/datasources/notes</jta-data-source>
<properties>
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.dialect" value="org.hibernate.dialect.SQLServer2005Dialect" />
<property name="hibernate.transaction.manager_lookup_class"
value="org.hibernate.transaction.JBossTransactionManagerLookup" />
<property name="hibernate.hbm2ddl.auto" value="create"/>
</properties>
</persistence-unit>
</persistence>
I've tried various values for hibernate.hbm2ddl.auto ('create-drop', 'update', 'validate'), but since my test passes, I assume that the new rows are being inserted and then immediately removed by Arquillian after the unit test?
The unit test below passes, meaning Arquillian's XML file and all the other assorted plumbing appear to be set up correctly. Is there a setting somewhere to keep all the data that's being inserted?
private NoteEntity createNote() {
    NoteEntity note = new NoteEntity();
    note.setGuid("123456789");
    note.setAuthorId("12345");
    return note;
}

@Test
public void createNoteTest() {
    NoteEntity note1 = createNote();
    mEntityManager.persist(note1);
    Assert.assertNotNull(note1.getId());
}
Generally, JUnit tests are configured so that their transactions run with defaultRollback=true. This is done to avoid inserting test data into your database. You will probably find the configuration on your class definition or in an extended class.
Example for JUnit with Spring IoC configuration:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath*:myPath/spring*Context.xml")
@TransactionConfiguration(defaultRollback = true)
@Transactional
public abstract class AbstractSpringTestCase {
    ...
}
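If you actually want the rows to remain visible in SQL Server after the test, one option is to manage the transaction yourself and commit it. A sketch, assuming an in-container Arquillian test where a UserTransaction can be injected (field names reuse the ones from the question):
// sketch: assumes javax.annotation.Resource and javax.transaction.UserTransaction are available to the test
@Resource
private UserTransaction utx;

@PersistenceContext
private EntityManager mEntityManager;

@Test
public void createNoteAndCommitTest() throws Exception {
    utx.begin();
    NoteEntity note1 = createNote();
    mEntityManager.persist(note1);
    utx.commit(); // once committed, the row stays in the Note table after the test run
    Assert.assertNotNull(note1.getId());
}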
I have the following JPA code, with all the values checked (ticket contains a valid bean, the method ends without exceptions, etc.). It executes and does not throw anything, yet in the end no data is written to the table.
I also tried retrieving a bean from the table; that also "works" (the table is empty, so no data is returned).
The setup is:
JBoss 6.1 Final
SQL Server 2008 Express (Microsoft JDBC Driver 3.0)
The persistence code:
public String saveTicket() {
    System.out.println("Controller saveTicket()");
    // I know it would be better to share a single factory instance; this is just for testing
    EntityManagerFactory factory = Persistence.createEntityManagerFactory("GesMan");
    EntityManager entityMan = factory.createEntityManager();
    entityMan.persist(this.ticket);
    entityMan.close();
    return null; // nothing meaningful to return in this test snippet
}
The persistence unit is
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="GesMan" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>java:/GesManDS</jta-data-source>
<class>es.caib.gesma.gesman.Ticket</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.SQLServerDialect"/>
<property name="hibernate.transaction.manager_lookup_class"
value="org.hibernate.transaction.JBossTransactionManagerLookup"/>
<property name="hibernate.show_sql" value="true"/>
</properties>
</persistence-unit>
</persistence>
The datasource
<datasources>
<local-tx-datasource>
<jndi-name>GesManDS</jndi-name>
<connection-url>jdbc:sqlserver://spsigeswnt14.caib.es:1433;DatabaseName=TEST_GESMAN</connection-url>
<driver-class>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver-class>
<user-name>thisis</user-name>
<password>notthepassword</password>
<check-valid-connection-sql>SELECT * FROM dbo.Ticket</check-valid-connection-sql>
<metadata>
<type-mapping>MS SQLSERVER</type-mapping>
</metadata>
</local-tx-datasource>
</datasources>
Call entityMan.flush() or transaction.commit() before closing it; otherwise all queued changes will be discarded on close.
In the end it looks like I was using the wrong approach. In JBoss you can't (or at least I could not manage to) access JPA directly the way you would in Java SE.
I ended up creating an EJB (with container-managed transactions) and moving all the JPA logic there.
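For reference, a minimal sketch of that EJB approach (the class and method names are illustrative, reusing the GesMan persistence unit and Ticket entity from above):
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import es.caib.gesma.gesman.Ticket;

@Stateless
public class TicketService {

    @PersistenceContext(unitName = "GesMan")
    private EntityManager entityManager;

    // runs in a container-managed JTA transaction, committed when the method returns
    public void saveTicket(Ticket ticket) {
        entityManager.persist(ticket);
    }
}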
PS: Of course, if I am wrong, please tell me (by now it is more of an academic issue, but I still want to know).