enable MyBatis SQL logs in WebSphere Application Server, log4j - ibatis

We are using MyBatis 3. I want to see the SQL logs but couldn't find how to enable them. I am using log4j in my application.
I followed this MyBatis documentation - http://mybatis.github.io/mybatis-3/logging.html - but when I run the application I get the exception below. Am I missing something?
Cause: org.apache.ibatis.builder.BuilderException: Error parsing SQL
Mapper Configuration. Cause:
org.apache.ibatis.builder.BuilderException: The setting logImpl is not
known. Make sure you spelled it correctly (case sensitive).
I have given this setting in the MyBatis configuration file, under configuration:
<settings>
<setting name="logImpl" value="LOG4J"/>
</settings>

My situation was the same: I was using MyBatis 3.1 and received the same error. It seems the logImpl setting was added in a later version (3.2).
Experimentally I found out that MyBatis was trying to use SLF4J for logging, while I wanted to use log4j.
For me the fix was to add a dependency on the slf4j-log4j bridge library (I'm using log4j 1.2.17 and slf4j-log4j12 1.7.5). So the workaround is not to set a logger for MyBatis, but to provide another implementation behind its default logging interface (slf4j-api).
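A minimal sketch of the extra dependency, assuming a Maven build (the versions are the ones mentioned above):
<!-- Binds slf4j-api (the logging interface MyBatis picks up first) to log4j 1.x -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
With the bridge on the classpath, the SQL only shows up if log4j logs the mapper namespace at DEBUG or TRACE, for example in log4j.properties (com.example.mappers is a hypothetical mapper package; use your own):
# DEBUG logs the SQL statements; TRACE also logs the result rows
log4j.logger.com.example.mappers=TRACE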

Related

Apache Camel with Kafka Schema registry

I am building a Camel application to read messages from Confluent Kafka. The messages are in Avro format, and I added the route configuration below to read them using the schema registry in the Camel route. When I enable valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer,
I do not get any messages from the Kafka topic. I tested the route without the schema registry and was able to consume messages.
Route definition:
from("kafka:topic1?sslTruststoreLocation=<jks file>
&valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
&brokers=host1:9092,host2:9092,host3:9092
&sslKeystoreType=JKS
&groupId=grp1
&allowManualCommit=true
&consumersCount=10
&sslKeyPassword=<password>
&autoOffsetReset=earliest
&sslKeystorePassword=<password>
&securityProtocol=SSL
&sslTruststorePassword=<password>
&sslEndpointAlgorithm=HTTPS
&maxPollRecords=10
&sslTruststoreType=JKS
&sslKeystoreLocation=<keystore_path>
&autoCommitEnable=false
&additionalProperties.schema.registry.url=https://localhost:8081
&additionalProperties.basic.auth.user.info=abc:xyz
&additionalProperties.basic.auth.credentials.source=USER_INFO");
Can you please let me know what is wrong in the above configuration for the schema registry? I also tried EndpointRouteBuilder, with the same issue. However, the producer application, which is also Camel based and uses the schema registry to publish Avro messages, is working fine.
I figured out the way to configure basic auth with the Confluent schema registry. We need to configure it as below:
from("kafka:topic1?sslTruststoreLocation=<jks file>
&valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
&brokers=host1:9092,host2:9092,host3:9092
&sslKeystoreType=JKS
&groupId=grp1
&allowManualCommit=true
&consumersCount=10
&sslKeyPassword=<password>
&autoOffsetReset=earliest
&sslKeystorePassword=<password>
&securityProtocol=SSL
&sslTruststorePassword=<password>
&sslEndpointAlgorithm=HTTPS
&maxPollRecords=10
&sslTruststoreType=JKS
&sslKeystoreLocation=<keystore_path>
&autoCommitEnable=false
&additionalProperties.schema.registry.url=https://localhost:8081
&additional-properties[basic.auth.user.info]=abc:xyz
&additional-properties[basic.auth.credentials.source]=USER_INFO");
Note that we need to use additional-properties for basic.auth.user.info and basic.auth.credentials.source, as shown above.
My issue was that the schema registry password contained special characters, such as +.
So I had to wrap the property value in RAW, as described in the documentation [1].
Given the above example, it would then result in:
&additional-properties[basic.auth.user.info]=RAW(abc:xyz+)
[1] https://camel.apache.org/manual/faq/how-do-i-configure-endpoints.html#HowdoIconfigureendpoints-Configuringparametervaluesusingrawvalues
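Once the registry lookup works, the deserializer hands the route Avro records. Below is a minimal sketch of inspecting them; the endpoint URI is abbreviated (reuse the full option list from the routes above), and the GenericRecord body type is an assumption based on the deserializer configured there:
from("kafka:topic1?valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer"
        + "&additional-properties[basic.auth.user.info]=RAW(abc:xyz+)"
        + "&additional-properties[basic.auth.credentials.source]=USER_INFO")
    // the Confluent deserializer typically yields an Avro GenericRecord body
    .process(exchange -> {
        org.apache.avro.generic.GenericRecord record =
                exchange.getIn().getBody(org.apache.avro.generic.GenericRecord.class);
        // work with record.get("someField") here
    })
    .log("Consumed Avro record: ${body}");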

How to do a Snowflake DB JNDI connection in WebSphere Liberty application server

Is there a way to configure Snowflake connection pooling in WebSphere Application Server?
I tried the configuration below inside the server.xml file, but it is not working.
<dataSource id="SnowflakeDataSource" jndiName="jdbc/BM_SF" type="javax.sql.DataSource">
<properties db="abcd" schema="_TARGET" URL="jdbc:snowflake://adpdc_cdl.us-east-1.privatelink.snowflakecomputing.com" user="****" password="****" />
<jdbcDriver libraryRef="DatacloudLibs" javax.sql.DataSource="net.snowflake.client.jdbc.SnowflakeBasicDataSource"/>
</dataSource>
To clarify, the configuration that you have configures WebSphere Application Server Liberty's connection pooling for a Snowflake data source, rather than Snowflake's connection pooling.
The configuration that you have looks mostly good.
When I looked up the SnowflakeBasicDataSource class that you are using, I saw that it has a property called "databaseName", not "db", so you'll need to switch that in your configuration.
You will also need to configure one of the jdbc-4.x features in Liberty if you haven't already, and if you plan to look it up in JNDI (vs inject it), you'll need the jndi-1.0 feature.
Here is an example with some corrections:
<featureManager>
<feature>jdbc-4.2</feature>
<feature>jndi-1.0</feature>
... your other features here
</featureManager>
<dataSource id="SnowflakeDataSource" jndiName="jdbc/BM_SF" type="javax.sql.DataSource">
<properties databaseName="abcd" schema="_TARGET" URL="jdbc:snowflake://adpdc_cdl.us-east-1.privatelink.snowflakecomputing.com" user="****" password="****" />
<jdbcDriver libraryRef="DatacloudLibs" javax.sql.DataSource="net.snowflake.client.jdbc.SnowflakeBasicDataSource"/>
</dataSource>
If this still doesn't work, check your definition of the DatacloudLibs library to ensure that it properly points at the Snowflake JDBC driver, and if it still doesn't work, post the error message that you see in case it helps to determine the cause.
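For reference, a minimal sketch of the remaining pieces; the library path, jar name, and lookup code are assumptions to adapt to your environment:
<library id="DatacloudLibs">
    <fileset dir="/opt/snowflake" includes="snowflake-jdbc-*.jar"/>
</library>
// Application code: obtain a pooled connection through the JNDI name above
// (throws NamingException/SQLException)
javax.sql.DataSource ds = javax.naming.InitialContext.doLookup("jdbc/BM_SF");
try (java.sql.Connection con = ds.getConnection()) {
    // the connection is served from Liberty's connection pool for this dataSource
}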

How to configure Karaf org.ops4j.pax.logging.cfg to use a sift appender based on the log4j2 logging category

I'm using Karaf and Camel and have been able to configure PAX logging to sift on MDC fields (camel.routeId) and that works just fine.
I'm wondering if I can configure log4j2 to sift on the logging category field (%c or %logger in log4j2 conversion pattern terms) or if anyone can point me in the right direction as to how I could go about configuring it.
Log4j2 (pax-logging-log4j2) "sifts" on the basis of MDC data. By default the logger/category is not part of this context data. You can, however, put the logger name into the MDC yourself.
In pax-logging-log4j2, the org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl#setDelegateContext() method sets three keys:
bundle.id
bundle.name
bundle.version
Camel sets its own keys (like context-id) in the org.apache.camel.impl.MDCUnitOfWork constructor.
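So one option is to put the category into the MDC in your own code and sift on that key. A minimal sketch, assuming an MDC key named "category" (the key name is arbitrary; your sift/routing pattern would then reference ${ctx:category} instead of an existing MDC field):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class CategoryMdc {
    private static final Logger LOG = LoggerFactory.getLogger(CategoryMdc.class);

    public void doWork() {
        // expose the logger/category name as MDC context data so log4j2 can route on it
        MDC.put("category", LOG.getName());
        try {
            LOG.info("this line can now be sifted by the 'category' key");
        } finally {
            MDC.remove("category");
        }
    }
}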

Carrot2 dcs webapp setup

I have been struggling with setting up Carrot2 for use with PHP on a local machine. The plan is to have Carrot2 retrieve clusters from a Solr index populated by Nutch. Currently Solr and Nutch are correctly configured, and I have been able to access the information via the Carrot2 Workbench. Carrot2-dcs-3.10.0 has been set up and, I believe, correctly deployed through the tomcat6 manager, although the documentation on setting this up is horribly vague and incomplete. Changes to source-solr-attributes.xml were made according to https://sites.google.com/site/profileswapnilkulkarni/tech-talk/howtoconfigureandruncarrot2webapplicationwithsolrdocumentsource . Tomcat is set up on port 8080. The Carrot2 DCS PHP example example.php works and displays the test output correctly. However, when I try to perform a clustering using localIPAddress:8080/carrot2-dcs/index.html I run into a problem. When I set the document source to Solr and the query to : and then click cluster, I get the following error message.
HTTP Status 500 - Could not perform processing: org.apache.http.conn.HttpHostConnectException: Connection to localhost:8983 refused
type Status report
message Could not perform processing: org.apache.http.conn.HttpHostConnectException: Connection to localhost:8983 refused
description The server encountered an internal error that prevented it from fulfilling this request.
I have searched everywhere in the deployed webapp folder for carrot2 and can't find where it is getting localhost:8983 from.
Any assistance would be appreciated, thank you.
It turns out that the source-solr-attributes.xml file had an extra overridden-attributes element: one was before the default block comment with the example parameters, and the second was added by me with the parameters needed for my config. Deleting one of them so there was only one corrected the problem. Apparently, with two of those, it ignores the server settings and uses the default values instead.

How do you debug CXF endpoint publishing?

Given the "cxf-osgi" example from fuse source's apache-servicemix-4.4.1-fuse-00-08, built with maven 3.0.3, when deploying it to apache karaf 2.2.4 and CXF 2.4.3 the web service is never published and never visible to the CXF servlet (http://localhost:8181/cxf/). There are no errors in the karaf log. How would one go about debugging such behavior?
It's worth turning up the log level(s) - you can do this permanently in the etc/org.ops4j.pax.logging.cfg or in the console with log:set TRACE org.apache.cxf - IIRC this will show some useful information.
Also check that it's actually published on localhost/127.0.0.1 - it may well be published on another interface, the IP of the local network but not localhost. Try using 0.0.0.0 as the address; that way it will bind to all available interfaces.
As you're using maven, you can download the CXF source (easily in Eclipse) and connect a remote debugger to the Karaf instance, with some strategically placed breakpoints you should be able to get a handle on what's going on.
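To illustrate the first two suggestions: the permanent log-level change is a single line in etc/org.ops4j.pax.logging.cfg (log4j 1.x syntax, as used by Karaf 2.x), and the bind address can be made explicit in the example's beans.xml (the implementor class and port below are illustrative):
log4j.logger.org.apache.cxf = TRACE

<jaxws:endpoint id="helloWorld"
                implementor="org.example.HelloWorldImpl"
                address="http://0.0.0.0:9090/HelloWorld"/>
A relative address such as "/HelloWorld" is instead published through the CXF servlet under /cxf rather than on its own port.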
Try changing to Equinox instead of the default of Felix. There is a bug in 2.4.3 in that it doesn't work well with Felix. Alternatively, CXF 2.4.4 is now available that should also fix it.
Take a look at this issue I filed this week: https://issues.apache.org/jira/browse/CXF-4058
What I found is that if my beans.xml is loaded before the cxf bundle jar, then the endpoints are registered with CXF but not with the OSGi http service. So everything looks good from the logs but the endpoints are never accessible. This is a race condition.
I did two workarounds: 1) in the short term, just move my own jars later in the boot order (I use Karaf features) so Spring and CXF are fully loaded before my beans.xml is read and 2) abandon Spring and roll my own binding code based loosely on this approach: http://eclipsesource.com/blogs/2012/01/23/an-osgi-jax-rs-connector-part-1-publishing-rest-services/
I just implemented solution #2 yesterday and I'm already extremely happy with it. It's solved all of my classloader issues (before I had to manually add a lot of Import-Package lines because BND doesn't see beans.xml references) and fixed my boot race condition.
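For the first workaround, a hedged sketch of how the ordering can be expressed in a Karaf features.xml (the feature name, bundle coordinates, and start level are illustrative; the point is that the application bundle starts after the cxf feature):
<feature name="my-app" version="1.0.0">
    <!-- pull in CXF first so its services are up before the Spring beans.xml is processed -->
    <feature>cxf</feature>
    <bundle start-level="80">mvn:com.example/my-service/1.0.0</bundle>
</feature>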
