Logback dbAppender Custom SQL

Is there a way to change the tables that logback writes its data to when using DBAppender? It has three default tables that must be created before using it, but I want to customise it to write to a single table of my choosing, similar to Log4J, where I can specify the SQL that gets executed when inserting the log entry into the database.

Tomasz, maybe I'm missing something, but I don't see how just using a custom DBNameResolver answers what Magezy asked. DBNameResolver is used by DBAppender via SQLBuilder to construct three SQL insert queries - through DBNameResolver one can only affect the names of the tables and columns the data is inserted into, but cannot limit inserting to just one table, let alone control what actually gets inserted.
To match log4j's JDBCAppender, IMO one has to extend logback's DBAppender or DBAppenderBase, or perhaps implement a completely new custom appender.

The easiest way for me was to make an appender from scratch. I'm appending to a single table, using Spring JDBC. It works something like this:
public class MyAppender extends AppenderBase<ILoggingEvent>
{
    private String _jndiLocation;
    private JdbcTemplate _jt;

    public void setJndiLocation(String jndiLocation)
    {
        _jndiLocation = jndiLocation;
    }

    @Override
    public void start()
    {
        super.start();
        if (_jndiLocation == null)
        {
            throw new IllegalStateException("Must have the JNDI location");
        }
        DataSource ds;
        Context ctx;
        try
        {
            ctx = new InitialContext();
            Object obj = ctx.lookup(_jndiLocation);
            ds = (DataSource) obj;
            if (ds == null)
            {
                throw new IllegalStateException("Failed to obtain data source");
            }
            _jt = new JdbcTemplate(ds);
        }
        catch (Exception ex)
        {
            throw new IllegalStateException("Unable to obtain data source", ex);
        }
    }

    @Override
    protected void append(ILoggingEvent e)
    {
        // log to database here using my JdbcTemplate instance
    }
}
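For illustration, the append() body could issue a single-table insert along these lines (a sketch only; the LOGS table and its columns are hypothetical and must match your own schema):
@Override
protected void append(ILoggingEvent e)
{
    // Single INSERT into one table of your choosing; adjust names to your schema.
    _jt.update(
        "INSERT INTO LOGS (LOG_LEVEL, LOGGER_NAME, MESSAGE, CREATED_AT) VALUES (?, ?, ?, ?)",
        e.getLevel().toString(),
        e.getLoggerName(),
        e.getFormattedMessage(),
        new java.sql.Timestamp(e.getTimeStamp()));
}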
I ran into trouble with SLF4J - the substitute logger error described here:
http://www.slf4j.org/codes.html#substituteLogger
This thread on multi-step configuration enabled me to work around that issue.

You need to implement ch.qos.logback.classic.db.names.DBNameResolver and use it in the configuration:
<appender name="DB" class="ch.qos.logback.classic.db.DBAppender">
<dbNameResolver class="com.example.MyDBNameResolver"/>
<!-- ... -->
</appender>
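For illustration, a resolver that simply prefixes the default table names might look like this (a minimal sketch; as noted above, this only renames tables and columns, it does not change what gets inserted - logback also ships SimpleDBNameResolver for prefix/suffix cases):
public class MyDBNameResolver implements DBNameResolver {
    @Override
    public <N extends Enum<?>> String getTableName(N tableName) {
        // e.g. LOGGING_EVENT -> my_logging_event
        return "my_" + tableName.name().toLowerCase();
    }

    @Override
    public <N extends Enum<?>> String getColumnName(N columnName) {
        return columnName.name().toLowerCase();
    }
}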

<appender name="CUSTOM_DB_APPENDER" class="com.....MyDbAppender">
<filter class="com......MyFilter"/>
<param name="jndiLocation" value="java:/comp/env/jdbc/....MyPath"/>
</appender>
And your MyDbAppender class should have a String jndiLocation field with a setter.
Now do a JNDI lookup (see the solution answered on Oct 17 '11 at 16:03, above).

Related

Spring Batch "Invalid object name BATCH_JOB_INSTANCE"

I've created a Spring Batch job to query an Azure SQL Server database and write the data into a CSV file. I do not have create permissions on the database, and I get the error "Invalid object name BATCH_JOB_INSTANCE" when running the batch. I don't want the Spring Batch meta-data tables to be created in the main database; it would be helpful if I could have them in another local or in-memory DB like H2 instead.
I've also already added spring-batch-initialize-schema=never, as suggested by most answers to similar questions on here, but that didn't help.
Edit:
I resolved the Invalid object name error by preventing the meta-data tables from being created in the main database: I extended the DefaultBatchConfigurer class and overrode the setDataSource method, so they are created in the in-memory map-based repository instead. Now I want to try two options:
How to have the meta-data tables created in a local or in-memory DB like H2.
Or, if I have the meta-data tables already created in the main database, in a different schema than the main table I'm fetching from: how to point my job to those meta-data tables in the other schema, so the job and step details are stored there.
@Configuration
public class SpringBatchConfig extends DefaultBatchConfigurer {
    @Override
    public void setDataSource(DataSource dataSource) {
        // intentionally empty: with no DataSource set, Spring Batch
        // falls back to a map-based (in-memory) job repository
    }
    ...
My application.properties file looks like this:
spring.datasource.url=
spring.datasource.username=
spring.datasource.password=
spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring-batch-initialize-schema=never
spring.batch.job.enabled=false
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
I've created a demo with two datasources: batch meta-data is stored in an H2 DB, and the job datasource is Azure SQL.
We need to define a DataSourceConfig class and put the @Primary annotation on the H2 DataSource bean:
@Configuration
public class DataSourceConfig {

    @Bean(name = "mssqlDataSource")
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource appDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "h2DataSource")
    @Primary
    // @ConfigurationProperties(prefix = "spring.datasource.h2")
    public DataSource h2DataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:h2:mem:thing:H2;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE")
                .driverClassName("org.h2.Driver")
                .username("sa")
                .password("")
                .build();
    }
}
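Because the H2 bean is marked @Primary, Spring Batch's auto-configuration picks it up for the BATCH_* meta-data tables, while the job below explicitly wires the mssqlDataSource for the business data.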
In the ItemReaderDbDemo class, we use @Autowired with @Qualifier("mssqlDataSource") to specify the dataSource used by the Spring Batch step:
@Configuration
public class ItemReaderDbDemo {

    // generates Job objects
    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    // generates Step objects (a Step executes the tasks)
    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    @Qualifier("mssqlDataSource")
    private DataSource dataSource;

    @Autowired
    @Qualifier("dbJdbcWriter")
    private ItemWriter<? super Todo> dbJdbcWriter;

    @Bean
    public Job itemReaderDbDemoJob() {
        return jobBuilderFactory.get("itemReaderDbDemoJob").start(itemReaderDbStep()).build();
    }

    @Bean
    public Step itemReaderDbStep() {
        return stepBuilderFactory.get("itemReaderDbStep")
                .<Todo, Todo>chunk(2)
                .reader(dbJdbcReader())
                .writer(dbJdbcWriter)
                .build();
    }

    @Bean
    @StepScope
    public JdbcPagingItemReader<Todo> dbJdbcReader() {
        JdbcPagingItemReader<Todo> reader = new JdbcPagingItemReader<Todo>();
        reader.setDataSource(dataSource);
        reader.setFetchSize(2);
        reader.setRowMapper(new RowMapper<Todo>() {
            @Override
            public Todo mapRow(ResultSet rs, int rowNum) throws SQLException {
                Todo todo = new Todo();
                todo.setId(rs.getLong(1));
                todo.setDescription(rs.getString(2));
                todo.setDetails(rs.getString(3));
                return todo;
            }
        });
        SqlServerPagingQueryProvider provider = new SqlServerPagingQueryProvider();
        provider.setSelectClause("id,description,details");
        provider.setFromClause("from dbo.todo");
        // sort keys required for paging
        Map<String, Order> sort = new HashMap<>(1);
        sort.put("id", Order.DESCENDING);
        provider.setSortKeys(sort);
        reader.setQueryProvider(provider);
        return reader;
    }
}
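The demo autowires a dbJdbcWriter bean that isn't shown above; a sketch of what it might look like as a JdbcBatchItemWriter (the target table dbo.todo_copy and its columns are hypothetical - Spring calls afterPropertiesSet on the bean automatically):
@Bean
public ItemWriter<Todo> dbJdbcWriter(@Qualifier("mssqlDataSource") DataSource dataSource) {
    JdbcBatchItemWriter<Todo> writer = new JdbcBatchItemWriter<>();
    writer.setDataSource(dataSource);
    writer.setSql("INSERT INTO dbo.todo_copy (id, description, details) "
            + "VALUES (:id, :description, :details)");
    // maps the named parameters to the Todo getters
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
    return writer;
}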
Here is my application.properties:
logging.level.org.springframework.jdbc.core=DEBUG
spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.datasource.jdbcUrl=jdbc:sqlserver://josephserver2.database.windows.net:1433;database=<Your-Database-Name>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;
spring.datasource.username=<Your-UserName>
spring.datasource.password=<Your-Password>
spring.datasource.initialization-mode=always
It returns the expected result from my Azure SQL database. By the way, my Azure SQL user does not have create permissions on the database.
How to have the meta-data tables created in a local or in-memory DB like H2.
You can use spring.batch.initialize-schema=embedded for that.
Or, if I have the meta-data tables already created in the main database, in a different schema than the main table I'm fetching from: how to point my job to those meta-data tables in the other schema, so the job and step details are stored there.
Spring Batch works against a DataSource, not a particular schema. If the meta-data tables are in a different schema, you need to create a second datasource pointing to that schema and set it on the job repository.
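For example, a custom batch configurer along these lines could hand a dedicated meta-data datasource to the job repository (a sketch; the metadataDataSource bean name is hypothetical):
@Configuration
public class BatchMetadataConfig extends DefaultBatchConfigurer {

    // points at the datasource/schema that holds the BATCH_* tables
    public BatchMetadataConfig(@Qualifier("metadataDataSource") DataSource metadataDataSource) {
        super(metadataDataSource);
    }
}
Alternatively, if the batch user can reach the other schema through the same datasource, the table-prefix property (spring.batch.jdbc.table-prefix on newer Boot versions) can be set to something like OTHERSCHEMA.BATCH_ to qualify the table names.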
I know this post is a little bit old, but I'd like to give an update.
For newer versions of Spring Boot, spring.batch.initialize-schema is deprecated. I'm using Spring Boot 2.7.1, and the newer property is spring.batch.jdbc.initialize-schema.
In my case, the error message came up because the user did not have the CREATE TABLE permission needed to create the corresponding Spring Batch tables. Adding the permissions fixed the issue.

How to intercept and modify SQL of PreparedStatement with Spring?

Is there a central location in the JdbcTemplate (or related) where SQL manipulations can be performed immediately before statements are sent to the DB?
I want to prepend a comment line to each and every SQL statement that gets issued to the RDBMS.
I hope there is a dedicated extension point; otherwise I would need to write my own class that inherits from JdbcTemplate and adds my custom logic, which I want to avoid.
Is there a central location in the JdbcTemplate (or related) where SQL manipulations can be performed immediately before they are sent to the DB?
Yes, the DataSource.
With datasource-proxy you can intercept queries at the DataSource level using a custom QueryTransformer:
private static class MyQueryTransformer implements QueryTransformer {
    @Override
    public String transformQuery(TransformInfo transformInfo) {
        String query = transformInfo.getQuery();
        // transform the query here, e.g. prepend a comment to every statement
        return "/* my-app */ " + query;
    }
}
and supplying it to ProxyDataSourceBuilder:
ProxyDataSourceBuilder.create()
        ...
        .queryTransformer(new MyQueryTransformer())
        ...
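Putting it together, a sketch of the wiring (assuming the real pool is available as originalDataSource):
// Wrap the real DataSource; JdbcTemplate then talks to the proxy,
// so every statement passes through MyQueryTransformer first.
DataSource proxy = ProxyDataSourceBuilder
        .create(originalDataSource)
        .queryTransformer(new MyQueryTransformer())
        .build();
JdbcTemplate jdbcTemplate = new JdbcTemplate(proxy);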
See also datasource-proxy-examples

Output data from database in JSF page

I am making a project using JSF, and I know how to get data from my view. I also know how to get data with the JDBC connector, and how to put data into the view from some objects. But my question is:
How do I put data directly from my database, for example a list of persons, into JSF, for example with the tag <h:outputText value="#{}"/>?
I have found some examples with instantiated objects, but I have not found a real example with data from a DB.
JSF is just an MVC framework for developing web applications in Java. JSF is not associated with any data source at all. The only data JSF will use is retrieved from:
Data already stored as attributes in the relevant objects: HttpServletRequest, HttpSession or ServletContext.
The request/view/session/application context in the form of fields in the managed beans, i.e. classes decorated with @ManagedBean (or @Named if using CDI). The data in these fields is stored as attributes in the objects mentioned above, depending on the scope of the managed bean.
Knowing this, the only thing you need to worry about is filling the fields of your managed beans. You can fill them with data coming from a database, from a web service, or whatever data source you have in mind.
For example, if you need to populate your data before processing a request, you can do the following:
@ManagedBean
@ViewScoped
public class SomeBean {

    private List<Entity> entityList;

    @PostConstruct
    public void init() {
        SomeService someService = new SomeService();
        entityList = someService.findEntityList();
    }

    // getters and setters for the list...
}
// as you can see, this class is just pure Java
// you may use other frameworks if you want/need
public class SomeService {

    public List<Entity> findEntityList() {
        String sql = "SELECT field1, field2... FROM table";
        List<Entity> entityList = new ArrayList<>();
        try (Connection con = ...; // retrieve your connection somehow
             PreparedStatement pstmt = con.prepareStatement(sql)) {
            ResultSet rs = pstmt.executeQuery();
            while (rs.next()) {
                Entity entity = new Entity();
                entity.setField1(rs.getString("field1"));
                entity.setField2(rs.getString("field2"));
                //...
                entityList.add(entity);
            }
        } catch (Exception e) {
            // handle exception ...
            e.printStackTrace();
        }
        return entityList;
    }
}
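With the bean above in place, the view can then render the list with the usual JSF tags, e.g. an <h:dataTable value="#{someBean.entityList}" var="entity"> whose columns use <h:outputText value="#{entity.field1}"/> (the bean and property names here just follow the example above).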

Timeout waiting for connection from pool - despite single SolrServer

We are having problems with our SolrServer client's connection pool running out of connections in no time, even when using a pool of several hundred (we've tried 1024, just for good measure).
From what I've read, the following exception can be caused by not using a singleton HttpSolrServer object; however, see our XML config below as well:
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
at org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:232)
at org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:199)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:455)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
XML Config:
<solr:solr-server id="solrServer" url="http://solr.url.domain/"/>
<solr:repositories base-package="de.ourpackage.data.solr" multicore-support="true"/>
At this point, we are at a loss. We are running a web application on a Tomcat 7. Whenever a user requests a new page, we send one or more requests to the Solr server, requesting whatever we need, usually single entries or a page of 20 (using Spring Data).
As for the rest of our implementation, we are using an abstract SolrOperationsRepository class, which is extended by each of our repositories (one repository per core).
The following is how we set our solrServer. I suspect we are doing something fundamentally wrong here, which is why our connections are overflowing. According to the logs, they are always being returned to the pool, btw.
private SolrOperations solrOperations;

@SuppressWarnings("unchecked")
public final Class<T> getEntityClass() {
    return (Class<T>) ((ParameterizedType) getClass().getGenericSuperclass()).getActualTypeArguments()[0];
}

public final SolrOperations getSolrOperations() {
    /*HttpSolrServer solrServer = (HttpSolrServer) solrOperations.getSolrServer();
    solrServer.getHttpClient().getConnectionManager().closeIdleConnections(500, TimeUnit.MILLISECONDS);*/
    logger.info("solrOperations: " + solrOperations);
    return solrOperations;
}

@Autowired
public final void setSolrServer(SolrServer solrServer) {
    try {
        String core = SolrServerUtils.resolveSolrCoreName(getEntityClass());
        SolrTemplate template = templateHolder.get(core);
        /*solrServer.setConnectionTimeout(500);
        solrServer.setMaxTotalConnections(2048);
        solrServer.setDefaultMaxConnectionsPerHost(2048);
        solrServer.getHttpClient().getConnectionManager().closeIdleConnections(500, TimeUnit.MILLISECONDS);*/
        if (template == null) {
            template = new SolrTemplate(new MulticoreSolrServerFactory(solrServer));
            template.setSolrCore(core);
            template.afterPropertiesSet();
            logger.debug("Creating new SolrTemplate for core '" + core + "'");
            templateHolder.put(core, template);
        }
        logger.debug("setting SolrServer " + template);
        this.solrOperations = template;
    } catch (Exception e) {
        logger.error("cannot set solrServer...", e);
    }
}
The code that is commented out has mostly been used for testing purposes. I also read somewhere else that you cannot manipulate the solrServer object on the fly, which raises the question: how do I set a timeout/pool size in the XML config?
The implementation of a repository looks like this:
@Repository(value = "stellenanzeigenSolrRepository")
public class StellenanzeigenSolrRepositoryImpl extends SolrOperationsRepository<Stellenanzeige> implements StellenanzeigenSolrRepositoryCustom {
    ...

    public Query createQuery(Criteria criteria, Sort sort, Pageable pageable) {
        Query resultQuery = new SimpleQuery(criteria);
        if (pageable != null) resultQuery.setPageRequest(pageable);
        if (sort != null) resultQuery.addSort(sort);
        return resultQuery;
    }

    public Page<Stellenanzeige> findBySearchtext(String searchtext, Pageable pageable) {
        Criteria searchtextCriteria = createSearchtextCriteria(searchtext);
        Query query = createQuery(searchtextCriteria, null, pageable);
        return getSolrOperations().queryForPage(query, getEntityClass());
    }

    ...
}
Can any of you point to mistakes we've made that could possibly lead to this issue? Like I said, we are at a loss. Thanks in advance; I will, of course, update the question as we make progress or as you request more information.
The MulticoreSolrServerFactory always returns an HttpSolrServer backed by an HttpClient that only ever allows 2 concurrent connections to the same host, thus causing the above problem.
This seems to be a bug in spring-data-solr that can be worked around by creating a custom factory and overriding a few methods.
Edit: The clone method in MulticoreSolrServerFactory is broken. This hasn't been corrected yet. As some of my colleagues have run into this issue recently, I will post a workaround here: create your own class and override one method.
public class CustomMulticoreSolrServerFactory extends MulticoreSolrServerFactory {

    public CustomMulticoreSolrServerFactory(final SolrServer solrServer) {
        super(solrServer);
    }

    @Override
    protected SolrServer createServerForCore(final SolrServer reference, final String core) {
        // There is a bug in the original SolrServerUtils.cloneHttpSolrServer() method:
        // it doesn't clone the ConnectionManager and always returns the default
        // PoolingClientConnectionManager with a maximum of 2 connections per host.
        if (StringUtils.hasText(core) && reference instanceof HttpSolrServer) {
            HttpClient client = ((HttpSolrServer) reference).getHttpClient();
            String baseURL = ((HttpSolrServer) reference).getBaseURL();
            baseURL = SolrServerUtils.appendCoreToBaseUrl(baseURL, core);
            return new HttpSolrServer(baseURL, client);
        }
        return reference;
    }
}
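Wired into the template setup shown earlier, the workaround factory would simply replace the stock one (a sketch based on the setSolrServer method above):
// Use the custom factory so the cloned per-core servers keep the original HttpClient
SolrTemplate template = new SolrTemplate(new CustomMulticoreSolrServerFactory(solrServer));
template.setSolrCore(core);
template.afterPropertiesSet();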

Initialize Spring embedded database after deployment

I have a Spring MVC app with an embedded database (HSQLDB) that I want to initialize after deployment. I know that I could use an XML script to define initial data for my datasource but, since I'm using JPA + Hibernate, I would like to use Java code. Is there a way to do this?
Heavily updated answer (it was too complex before):
All you need is to add an initializing bean to your context, which will insert all the necessary data into the database:
public class MockDataPopulator {

    private static boolean populated = false;

    @Autowired
    private SessionFactory sessionFactory;

    @PostConstruct
    public void populateDatabase() {
        // Prevent duplicate initialization, as HSQL is also initialized only once.
        // Duplicate executions can happen when the application context is reloaded,
        // e.g. when running unit tests.
        if (populated) {
            return;
        }
        // Create new persistence session
        Session session = sessionFactory.openSession();
        session.setFlushMode(FlushMode.ALWAYS);
        // Insert mock entities
        session.merge(MockDataFactory.createMyFirstObject());
        session.merge(MockDataFactory.createMySecondObject());
        // ...
        // Flush and close
        session.flush();
        session.close();
        // Set initialization flag
        populated = true;
    }
}
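A minimal way to register it, assuming Java-based configuration (the surrounding config class is hypothetical):
@Configuration
public class MockDataConfig {

    // Exposes the populator as a bean so @Autowired and @PostConstruct are processed
    @Bean
    public MockDataPopulator mockDataPopulator() {
        return new MockDataPopulator();
    }
}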
