I am a bit confused about working with Blueprint Camel and Apache Karaf. When I was developing my route, I was using this to connect to my MS SQL database:
<bean id="dbcp" destroy-method="close" class="org.apache.commons.dbcp2.BasicDataSource">
<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
<property name="url" value="jdbc:sqlserver://server\instance;databaseName=xxx;" />
<property name="username" value="xxxx" />
<property name="password" value="xxx" />
</bean>
This was working flawlessly, and then I wanted to deploy it to Apache Karaf. I did so and ran into a lot of trouble because the SQL Server driver could not be found. So I tried to handle this another way, by exposing the DataSource as a service on Apache Karaf. This works, and I get hold of the reference like so in my blueprint:
<reference id="dbcp" interface="javax.sql.DataSource" filter="(osgi.jndi.service.name=Name)" availability="mandatory" />
Now this works, but I don't know exactly what it does behind the scenes. I've read about services and references, and often, to make my first example work, people expose a service and then use it in the bean.
Is there a right and a wrong way? On top of that, I've read that we should use a connection pool, but I have only seen an example of this in the first approach (1st code sample). I guess it amounts to the same thing when the DataSource is provided as a service, since I can call it from multiple bundles.
Thanks in advance, best regards
In Apache Karaf versions 4.2.x - 4.4.x it's generally good practice to use OSGi services to share DataSource-type objects. This makes your bundles more loosely coupled, and when making changes to connection parameters you only need to change them for the service, instead of having to reconfigure every bundle that uses that DataSource.
You can also create your own shared resources and expose them as services using blueprints, Declarative Services annotations, or the "hard way" using an activator and the bundle context.
I also recommend checking out the pax-jdbc-config and pax-jms-config features, as they allow you to create DataSource and ConnectionFactory type services from config files. These look for config files with the org.ops4j.datasource and org.ops4j.connectionfactory prefixes in their names, e.g. org.ops4j.datasource-Example.cfg.
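As a rough sketch of what such a config file might contain (the values are placeholders, and the mssql driver name assumes the matching pax-jdbc driver feature is installed):

# org.ops4j.datasource-Example.cfg
osgi.jdbc.driver.name=mssql
url=jdbc:sqlserver://server;databaseName=xxx
user=xxxx
password=xxx
dataSourceName=Example

The dataSourceName value is published as a service property on the resulting DataSource service, so a blueprint reference can match it with a filter such as (dataSourceName=Example).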
The only downside of using services is that they're specific to Karaf and OSGi, so if you ever need to move your integrations to a non-OSGi environment, you'll have to figure out another way to inject data sources into your integrations.
[edit]
By shared resources I mean resources you might want to access from multiple bundles. These can be anything from objects that contain shared data, connection objects for cloud blob storages, data access objects, Slack or Discord bots, services for sending mail, etc.
You can publish new services from beans in blueprints using the service tag. Below is an example from the OSGi R7 specification:
<blueprint>
    <service id="echoService"
        interface="com.acme.Echo" ref="echo"/>
    <bean id="echo" class="com.acme.EchoImpl">
        <property name="message" value="Echo: "/>
    </bean>
</blueprint>
public interface Echo {
    public String echo(String m);
}

public class EchoImpl implements Echo {
    String message;

    public void setMessage(String m) {
        this.message = m;
    }

    public String echo(String s) { return message + s; }
}
With OSGi Declarative Services (DS) / Service Component Runtime (SCR) annotations you can publish new services in Java:
@Component
public class EchoImpl implements Echo {
    String message;

    public void setMessage(String m) {
        this.message = m;
    }

    public String echo(String s) { return message + s; }
}
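For completeness, here is a minimal sketch of the consuming side with DS (the class is illustrative, not from the examples above); the service instance is injected into the field annotated with @Reference:

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical consumer: DS activates this component once an Echo service is available.
@Component
public class EchoClient {

    @Reference
    private Echo echo;

    @Activate
    void start() {
        System.out.println(echo.echo("hello"));
    }
}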
Another example can be found in the official Karaf examples on GitHub.
I just created a batch job using the Spring Batch framework, but I don't have database privileges to run CREATE SQL. When I try to run the batch job I hit an error while the framework tries to create the BATCH_JOB_INSTANCE table. I tried to disable the initialization:
<jdbc:initialize-database data-source="dataSource" enabled="false">
...
</jdbc:initialize-database>
But after that I still hit the error:
org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = ? and JOB_KEY = ?]; nested exception is java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
Is there any way to disable the SQL? I just want to test that my reader, writer and processor work properly.
UPDATE:
As of Spring Boot 2.5.0, you should use spring.batch.jdbc.initialize-schema instead. See the source.
With Spring Boot 2.0 you probably need this:
https://docs.spring.io/spring-boot/docs/2.0.0.M7/reference/htmlsingle/#howto-initialize-a-spring-batch-database
spring.batch.initialize-schema=always
By default it will only create the tables if you are using an embedded database.
Or
spring.batch.initialize-schema=never
To permanently disable it.
Spring Batch uses the database to save metadata for its recover/retry functionality.
If you can't create tables in the database, then you have to disable this behaviour.
If you can create the batch metadata tables, just not at runtime, then you might create them manually.
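If disabling is the route you take, here is a minimal sketch of that (assuming Spring Batch 4.x, where DefaultBatchConfigurer is still available): overriding setDataSource with an empty body makes the framework fall back to its in-memory, map-based job repository, so no BATCH_* tables are needed:

import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.context.annotation.Configuration;

@Configuration
public class InMemoryBatchConfig extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // Intentionally empty: with no DataSource set, Spring Batch uses a
        // map-based JobRepository instead of the JDBC one.
    }
}

Note that the map-based repository keeps no state across restarts, which is fine for testing readers, writers and processors.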
Spring Batch requires the following tables to run a job:
BATCH_JOB_EXECUTION
BATCH_JOB_EXECUTION_CONTEXT
BATCH_JOB_EXECUTION_PARAMS
BATCH_JOB_EXECUTION_SEQ
BATCH_JOB_INSTANCE
BATCH_JOB_SEQ
BATCH_STEP_EXECUTION
BATCH_STEP_EXECUTION_CONTEXT
BATCH_STEP_EXECUTION_SEQ
If you are using the H2 database, then it will create all the required tables by default:
spring.h2.console.enabled=true
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
When you start using MySQL or any other database, you need to add the following property to application.properties:
spring.batch.initialize-schema=always
Since Spring Boot 2.5.0, use:
# to always initialize the datasource:
spring.batch.jdbc.initialize-schema=always
# to only initialize an embedded datasource:
spring.batch.jdbc.initialize-schema=embedded
# to never initialize the datasource:
spring.batch.jdbc.initialize-schema=never
(spring.batch.initialize-schema is deprecated since 2.5.0 for removal in 2.7.0)
To enable auto-creation of the Spring Batch metadata schema, simply add this line to your application.properties file:
spring.batch.initialize-schema=always
To understand more about the Spring Batch metadata schema, see:
https://docs.spring.io/spring-batch/trunk/reference/html/metaDataSchema.html
Seems silly, but someone may run into the same problem.
I was receiving this error after dropping all the tables from a database. When I tried to start the Spring Batch job, I received the error:
bad SQL grammar [SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = ? and JOB_KEY = ?]
and:
Invalid object name 'BATCH_JOB_INSTANCE'
This happened to me because I dropped the tables without restarting the service. The service had started and read the database metadata while the Batch tables were still in the database. After dropping them without restarting the server, Spring Batch thought the tables still existed.
After restarting the Spring Batch server and executing the batch again, the tables were created without error.
When running with Spring Boot:
Running with Spring Boot v1.5.14.RELEASE, Spring v4.3.18.RELEASE
This should be enough:
spring:
  batch:
    initializer:
      enabled: false
The initialize-schema property did not work for this Spring Boot version.
After that, I was able to copy the SQL scripts from the spring-batch-core jar and change the table capitalization, since that was my issue with the automatic table creation under Windows/Mac/Linux.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:jdbc="http://www.springframework.org/schema/jdbc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/jdbc
        http://www.springframework.org/schema/jdbc/spring-jdbc-3.2.xsd">

    <!-- database -->
    <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="com.mysql.jdbc.Driver" />
        <property name="url" value="jdbc:mysql://localhost:3306/springbatch" />
        <property name="username" value="root" />
        <property name="password" value="" />
    </bean>

    <!-- transaction manager -->
    <bean id="transactionManager" class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />

    <!-- create job-meta tables automatically -->
    <jdbc:initialize-database data-source="dataSource">
        <jdbc:script location="org/springframework/batch/core/schema-drop-mysql.sql" />
        <jdbc:script location="org/springframework/batch/core/schema-mysql.sql" />
    </jdbc:initialize-database>
</beans>
And make sure you are using a spring-jdbc version compatible with spring-batch; spring-jdbc-3.2.2.RELEASE.jar is most probably compatible.
The <jdbc:initialize-database/> tag is parsed by Spring using InitializeDatabaseBeanDefinitionParser. You can try debugging this class in your IDE to see exactly what value is picked up for the enabled attribute. This value can also be disabled using the JVM parameter -Dspring.batch.initializer.enabled=false.
This works for me (Spring Boot 2.0):
spring:
  batch:
    initialize-schema: never
    initializer:
      enabled: false
Use the following setting, as the one suggested above has been deprecated:
spring.batch.jdbc.initialize-schema=always
In your DataSourceConfig you should add this code. Once the app comes up, it drops any existing schema and creates a new one.
@Bean
public DataSource dataSource() {
    // Build an embedded H2 database and run the Spring Batch schema scripts on startup.
    EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
    return builder.setType(EmbeddedDatabaseType.H2)
            .addScript("classpath:org/springframework/batch/core/schema-drop-h2.sql")
            .addScript("classpath:org/springframework/batch/core/schema-h2.sql")
            .build();
}
If you are not able to do this, the schema scripts are present in your .m2 folder. Run the scripts manually.
The location is inside the spring-batch-core jar:
C:\Users\XXX\.m2\repository\org\springframework\batch\spring-batch-core\4.3.7
Split open this jar and find the SQL schema scripts you want:
org\springframework\batch\core\schema-h2.sql
org\springframework\batch\core\schema-drop-h2.sql
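Alternatively, a small sketch for running those packaged scripts from code against an existing DataSource, using Spring's ResourceDatabasePopulator (the dataSource variable is assumed to be your already-configured javax.sql.DataSource):

import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

// Runs the schema script shipped inside the spring-batch-core jar.
ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
populator.addScript(new ClassPathResource("org/springframework/batch/core/schema-h2.sql"));
populator.execute(dataSource);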
I had previously used Spring MVC and Hibernate annotations in my Google App Engine web application project. The application takes some time to start after deployment.
For that reason, I am switching to a Spring MVC XML-based approach for the controllers only. However, for the service and DAO classes, the @Service and @Repository annotations remain as they are.
In my Spring XML I am doing as below (there is no bean tag defined for the service and DAO classes):
<bean class="org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping" />
<bean class="org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter" />
<bean class="com.my.controller.UserController">
<property name="domainManager" ref="domainManager"/>
<property name="userProfileDao" ref="userProfileDao"/>
</bean>
Inside UserController, I am not using any @Autowired annotation. I am using a combination of annotations and XML. Are there any drawbacks to this approach? Am I going about this the wrong way?
The difference is not between using Annotation or XML, it's between Autowiring and "manually injecting beans".
EDIT: @Autowired and XML component scan are doing the same thing.
You can "manually inject" beans with both XML and full Java #Configuration, the equivalent of your example would be :
@Configuration
public class WebAppConfig {

    @Bean
    public UserDao userDao() {
        return new UserDao();
    }

    @Bean
    public UserController userController() {
        UserController ctrl = new UserController();
        ctrl.setUserDao(userDao());
        return ctrl;
    }
}
The question is quite pertinent, because the App Engine team itself has revealed that the App Engine runtime is bad at classpath scanning (which autowiring does to find matches by class).
The performance loss at instance startup time would occur if you were doing :
public class UserController {

    @Autowired
    private UserDao userDao;

    // ...
}
See this video, especially the question from the Pivotal (Spring framework) contributor : http://www.youtube.com/watch?v=lFarE1hH0ss
Few people know about this issue. Using Spring AOP can even crash outright on the production runtime. See: Using Spring AOP on App Engine causes StackOverflowError
So about your use of XML: there is no "right or wrong". Personally I don't like writing XML, since I feel it's more error prone, but some people like to clearly separate their configuration from their code. I still use autowiring in production, since the startup time is not an issue for me. Do what you and your team feel comfortable with, just keep in mind the GAE limitations.
Currently we have a Spring application that supports MySQL.
Some people prefer to use Oracle.
So I am looking for a way, with Spring, to have an abstract factory with a factory for every database, each one having its own DAO.
How do I put the glue between all these components?
How does a component know which data source needs to be used?
Are there good practices with Spring for doing this?
It's not clear what exactly your problem is, but Spring profiles are the answer to all of these questions. First you need to define two DataSources, one for each supported database:
<bean id="oracleDataSource" class="..." profile="oracle">
<!-- -->
</bean>
<bean id="mysqlDataSource" class="..." profile="mysql">
<!-- -->
</bean>
Note the profile attribute on the nested <beans> elements; in XML configuration that is where profiles are declared, not on individual bean tags. Actually you will probably get away with simply parametrizing one data source to use a different JDBC URL and driver, but that doesn't matter here.
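For reference, a sketch of that parametrized alternative (the placeholder keys are illustrative and would be resolved from a per-profile properties file):

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}" />
    <property name="url" value="${jdbc.url}" />
    <property name="username" value="${jdbc.username}" />
    <property name="password" value="${jdbc.password}" />
</bean>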
Now you define two versions of each DAO: one for Oracle and one for MySQL:
interface MonkeyDao {
    //...
}

@Repository
@Profile("oracle")
class OracleMonkeyDao implements MonkeyDao {
    //...
}

@Repository
@Profile("mysql")
class MySqlMonkeyDao implements MonkeyDao {
    //...
}
As you can see you have two beans defined implementing the same interface. If you do it without profiles and then autowire them:
@Resource
private MonkeyDao monkeyDao;
Spring startup will fail due to an unresolved (ambiguous) dependency. But if you enable one of the profiles (either mysql or oracle), Spring will only instantiate and create the bean for the matching profile.
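To pick a profile you enable it at startup; a minimal sketch (the base package is illustrative), equivalent to starting the JVM with -Dspring.profiles.active=mysql:

// Enable the "mysql" profile before the context is refreshed.
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
ctx.getEnvironment().setActiveProfiles("mysql");
ctx.scan("com.example.dao"); // hypothetical package containing the DAOs
ctx.refresh();
MonkeyDao dao = ctx.getBean(MonkeyDao.class); // resolves to MySqlMonkeyDao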
I am working on a GWT application which requires a connection to a MySQL database. I can do it successfully from a servlet. However, I require multiple RemoteServiceServlets to share a single Connection reference, as creating a new one every time makes no sense.
How can I achieve this?
Sharing a single JDBC connection in a servlet environment where multiple users are accessing it can have serious consequences: http://forums.oracle.com/forums/thread.jspa?threadID=554427
In a nutshell: a single connection represents a single DBMS user doing a single series of queries and/or updates, with one transaction in force at any given moment.
So basically, in a servlet environment you must use JDBC connection pooling, where you get a connection from a pool of reusable connections, but a single connection is only used by one servlet at a time. Here is an example implementation: http://java.sun.com/developer/onlineTraining/Programming/JDCBook/conpool.html
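As a rough sketch of the pattern (using Apache Commons DBCP, which also appears in the Spring-based answer below; the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.SQLException;
import org.apache.commons.dbcp.BasicDataSource;

public class ConnectionPoolHolder {

    // One shared pool for the whole web app; each servlet borrows a
    // Connection per request and returns it by closing it.
    private static final BasicDataSource POOL = new BasicDataSource();

    static {
        POOL.setDriverClassName("com.mysql.jdbc.Driver");
        POOL.setUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        POOL.setUsername("user");                        // placeholder credentials
        POOL.setPassword("secret");
    }

    public static Connection getConnection() throws SQLException {
        return POOL.getConnection(); // close() returns it to the pool, not the DB
    }
}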
If you're willing to use Spring, I would suggest trying the approach described here: http://pgt.de/2009/07/17/non-invasive-gwt-and-spring-integration-reloaded/ , which I used for some of my GWT projects.
Add your Spring context configuration to your web.xml file:
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/application-context.xml</param-value>
</context-param>
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
Define your datasource in the context file:
<bean id="dataSource" destroy-method="close"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="${jdbc.driverClassName}" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
</bean>
Have your servlets extend
public class AutoinjectingRemoteServiceServlet extends RemoteServiceServlet {

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        WebApplicationContext ctx = WebApplicationContextUtils.getRequiredWebApplicationContext(config.getServletContext());
        AutowireCapableBeanFactory beanFactory = ctx.getAutowireCapableBeanFactory();
        beanFactory.autowireBean(this);
    }
}
And then use your datasource as a Spring bean in all your servlets:
public class MyServiceImpl extends AutoinjectingRemoteServiceServlet implements MyService {

    @Autowired
    private DataSource dataSource;

    ...