I am getting this error. Stack trace (with root cause):
INFO | jvm 1 | 2019/06/06 08:49:59 | java.lang.AbstractMethodError
INFO | jvm 1 | 2019/06/06 08:49:59 | at net.sourceforge.jtds.jdbc.JtdsConnection.isValid(JtdsConnection.java:2833)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:874)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:389)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2398)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jsp.startpage_jsp.getDBConnection(startpage_jsp.java:60)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jsp.startpage_jsp.getUserInfo(startpage_jsp.java:474)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jsp.startpage_jsp._jspService(startpage_jsp.java:732)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
INFO | jvm 1 | 2019/06/06 08:49:59 | at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:476)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:386)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:330)
INFO | jvm 1 | 2019/06/06 08:49:59 | at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.tomcat.ForgotPasswordFilter.doFilter(ForgotPasswordFilter.java:78)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.webrequestsigner.WebRequestSignerFilter.doFilter(WebRequestSignerFilter.java:67)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:610)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:660)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.authenticator.SingleSignOn.invoke(SingleSignOn.java:240)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.owasp.valve.WhitelistHTTPMethodsValve.invoke(WhitelistHTTPMethodsValve.java:72)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.owasp.valve.XSSProtectionHeaderValve.invoke(XSSProtectionHeaderValve.java:175)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.tomcat.AddHeaderValve.invoke(AddHeaderValve.java:117)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.tomcat.ClickjackHostValve.invoke(ClickjackHostValve.java:107)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:806)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
INFO | jvm 1 | 2019/06/06 08:49:59 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
INFO | jvm 1 | 2019/06/06 08:49:59 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
INFO | jvm 1 | 2019/06/06 08:49:59 | at java.lang.Thread.run(Thread.java:748)
INFO | jvm 1 | 2019/06/06 08:49:59 |
I found some solutions that suggest adding validationQuery="select 1" to a <Resource> in context.xml, but my context.xml does not define a Resource at all.
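The suggestions look roughly like the sketch below: a complete <Resource>, declared directly in context.xml, with validationQuery set on it (the URL, username and password here are placeholders, not my real values):
<Resource name="jdbc/PlanonDS"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="net.sourceforge.jtds.jdbc.Driver"
          url="jdbc:jtds:sqlserver://<host>;instance=<instance>"
          username="<user>"
          password="<password>"
          validationQuery="select 1"/>
My actual context.xml, however, only contains a ResourceLink: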
<Context>
<!-- Authenticate against PlanonRealmLogin (JAAS) -->
<!-- allRolesMode=authOnly" means that no role is needed for '*' requirement -->
<Realm appName="PlanonRealmLogin"
className="org.apache.catalina.realm.JAASRealm"
userClassNames="nl.planon.cerebeus.PnUser"
roleClassNames="nl.planon.cerebeus.PnRole"
allRolesMode="authOnly"/>
<!--Valve className="nl.planon.tomcat.AccessKeyValve" throttle="5000"/-->
<!--Valve className="nl.planon.tomcat.ForgotPasswordLoginValve"/-->
<!-- Will force authentication attempts to be parsed as UTF-8. The Landingpage will prevent HTTP 408 messages
because now, even without a stored original location, Tomcat knows where to forward to. -->
<!-- exceptionAttributeName="PnLoginException"-->
<Valve className="nl.planon.tomcat.PnMessageFormAuthenticator" landingPage="/" characterEncoding="utf-8"/>
<!-- This valve excludes valid webdav users with role webdav_readwrite to enter the web application(s) -->
<!--Valve className="nl.planon.tomcat.ExcludingRoleValve"/-->
<!--Parameter name="trustedServiceKeystore" value="${catalina.home}/webclientKeystore.jks" />
<Parameter name="trustedServiceName" value="webclient" /-->
<Manager pathname="" />
<ResourceLink
name="jdbc/PlanonDS"
global="jdbc/PlanonDS"
type="javax.sql.DataSource" />
<!-- Whitelist the minimal set of HTTP Methods that Web Bootstrap needs -->
<!--Valve className="nl.planon.owasp.valve.WhitelistHTTPMethodsValve" methods="GET, OPTIONS, HEAD, POST, PUT, DELETE" /-->
In my server.xml, the resource is defined:
<GlobalNamingResources>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users
-->
<Resource auth="Container" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" name="UserDatabase" pathname="conf/tomcat-users.xml" type="org.apache.catalina.UserDatabase"/>
<Resource name="jdbc/PlanonDS"
auth="Container"
type="javax.sql.DataSource"
username=""
password=""
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sqlserver://SZH1DB;instance=planon"
validationQuery="select 1"
maxActive="8"
maxIdle="4"/>
Here I have set validationQuery="select 1", but I still get the same error.
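One thing I notice in the stack trace is that the connection is requested from a BasicDataSource inside startpage_jsp.getDBConnection (startpage_jsp.java:60). As far as I understand, DBCP2 only falls back to Connection.isValid() when no validation query is configured on the pool, so the validationQuery from server.xml may not be reaching the pool the JSP actually uses. For illustration only, a minimal sketch of a DBCP2 pool with the validation query set programmatically (hypothetical code, not taken from the real JSP):
// Hypothetical sketch only -- not the actual startpage_jsp code.
// Illustrates where a validation query would have to be set if a JSP
// builds its own DBCP2 pool instead of looking up jdbc/PlanonDS via JNDI.
import javax.sql.DataSource;
import org.apache.tomcat.dbcp.dbcp2.BasicDataSource;

public class PoolSketch {
    static DataSource createPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("net.sourceforge.jtds.jdbc.Driver");
        ds.setUrl("jdbc:jtds:sqlserver://<host>;instance=<instance>");
        ds.setUsername("<user>");
        ds.setPassword("<password>");
        // Without a validation query, DBCP2 validates connections with
        // Connection.isValid(), which is the call that blows up in the trace above.
        ds.setValidationQuery("select 1");
        ds.setTestOnBorrow(true); // test connections when they are borrowed from the pool
        return ds;
    }
}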
Can someone help me with this error?
Related
I am a developer looking for advice on optimisation and maintenance of a Postgres database.
I am currently investigating commands that help clean up/defragment the DB and release space back to the filesystem, because the database's disk usage is growing quickly. I found that "VACUUM FULL" can help reclaim the space held by dead tuples, but I could not find information on how many dead tuples (or what percentage) a table should have before running this command is worth considering.
Currently we have two tables in the Nextcloud Postgres database that have dead tuples. The total relation size for these tables is also higher than the disk usage reported by the \dt+ command. I am providing the stats below. Please advise whether they are eligible for "VACUUM FULL" based on these stats.
###########################################
Disk space usage per table (\dt+ command)
###########################################
Schema | Name | Type | Owner | Size | Description
--------+-----------------------------+-------+----------+------------+-------------
public | oc_activity | table | XXXXXXXX | 4796 MB |
public | oc_filecache | table | XXXXXXXX | 127 MB |
#################################
oc_activity total relation size
#################################
SELECT pg_size_pretty( pg_total_relation_size('oc_activity') )
----------------
pg_size_pretty
----------------
9666 MB
########################################
Additional stats for oc_activity table
########################################
relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup | n_mod_since_analyze | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze | vacuum_count | autovacuum_count | analyze_count | autoanalyze_count
-------+------------+-------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+-------------+-----------------+--------------+-------------------------------+--------------+------------------+---------------+-------------------
yyyyy | public | oc_activity | 272 | 1046966870 | 4737 | 57914604 | 1548217 | 0 | 325585 | 0 | 11440511 | 940192 | 268430 | | | | 2023-02-15 10:01:36.657028+00 | 0 | 0 | 0 | 3
###################################
oc_filecache total relation size
###################################
SELECT pg_size_pretty( pg_total_relation_size('oc_filecache') )
----------------
pg_size_pretty
----------------
541 MB
#########################################
Additional stats for oc_filecache table
#########################################
SELECT * FROM pg_stat_all_tables WHERE relname='oc_filecache'
relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup | n_mod_since_analyze | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze | vacuum_count | autovacuum_count | analyze_count | autoanalyze_count
-------+------------+--------------+----------+--------------+------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+-------------+-------------------------------+--------------+-------------------------------+--------------+------------------+---------------+-------------------
zzzzz | public | oc_filecache | 104541 | 28525391484 | 1974398333 | 2003365293 | 43575 | 695612 | 39541 | 348823 | 461510 | 19418 | 4069 | | 2023-02-16 10:46:15.165442+00 | | 2023-02-16 16:25:32.568168+00 | 0 | 8 | 0 | 33
There is no hard rule. I personally would consider a table uncomfortably bloated if the pgstattuple extension showed that less than a third or a quarter of the table is user data and the rest is dead tuples and empty space.
Rather than regularly running VACUUM (FULL) (which means downtime), you should strive to fix the problem that causes the table bloat in the first place.
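A sketch of that pgstattuple check for the two tables above (assuming the extension can be installed and you run it with sufficient privileges; it scans the whole table, so expect some I/O):
-- How much of each table is live data vs. dead tuples and free space?
CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT tuple_percent,        -- live user data
       dead_tuple_percent,   -- dead tuples
       free_percent          -- empty space
FROM pgstattuple('public.oc_activity');

SELECT tuple_percent, dead_tuple_percent, free_percent
FROM pgstattuple('public.oc_filecache');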
I created a database with two replicas. When I try to write some data into the database, the vnode status is always "syncing". How can I make it sync faster?
taos> show vgroups;
vgId | tables | status | onlines | v1_dnode | v1_status | v2_dnode | v2_status | compacting |
=================================================================================================================
2 | 1000 | ready | 2 | 3 | master | 2 | slave | 0 |
3 | 1000 | ready | 2 | 2 | master | 1 | slave | 0 |
4 | 1000 | ready | 2 | 1 | master | 3 | slave | 0 |
5 | 1000 | ready | 2 | 3 | master | 2 | slave | 0 |
6 | 1000 | ready | 2 | 2 | master | 1 | slave | 0 |
7 | 1000 | ready | 2 | 1 | master | 3 | slave | 0 |
8 | 1000 | ready | 2 | 3 | master | 2 | slave | 0 |
9 | 1000 | ready | 2 | 2 | master | 1 | slave | 0 |
10 | 1000 | ready | 2 | 1 | master | 3 | syncing | 0 |
11 | 1000 | ready | 2 | 3 | master | 2 | slave | 0 |
Query OK, 10 row(s) in set (0.004323s)
In Vertica, is there any way to access admintools without admin rights? How can a normal user find node and cluster configuration details?
You might have already tried it.
You must have access at the Linux shell level on one of the Vertica nodes, as a user that is allowed to write to /opt/vertica/log/adminTools.log; and that is, by default, dbadmin.
I would regard it as quite a security risk to tamper with the permissions around that, really.
A good start would be:
$ vsql -U <user> -w <pwd> -h 127.127.127.128 -d dbnm -x -c "select * from host_resources"
-[ RECORD 1 ]------------------+-----------------------------------------
host_name | 127.127.127.128
open_files_limit | 65536
threads_limit | 7873
core_file_limit_max_size_bytes | 0
processor_count | 1
processor_core_count | 2
processor_description | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
opened_file_count | 7
opened_socket_count | 10
opened_nonfile_nonsocket_count | 7
total_memory_bytes | 8254820352
total_memory_free_bytes | 1034915840
total_buffer_memory_bytes | 523386880
total_memory_cache_bytes | 5516861440
total_swap_memory_bytes | 2097147904
total_swap_memory_free_bytes | 2097147904
disk_space_free_mb | 425185
disk_space_used_mb | 76682
disk_space_total_mb | 501867
system_open_files | 1440
system_max_files | 798044
-[ RECORD 2 ]------------------+-----------------------------------------
host_name | 127.127.127.129
open_files_limit | 65536
threads_limit | 7873
core_file_limit_max_size_bytes | 0
processor_count | 1
processor_core_count | 2
processor_description | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
opened_file_count | 7
opened_socket_count | 9
opened_nonfile_nonsocket_count | 7
total_memory_bytes | 8254820352
total_memory_free_bytes | 1836150784
total_buffer_memory_bytes | 487129088
total_memory_cache_bytes | 4774060032
total_swap_memory_bytes | 2097147904
total_swap_memory_free_bytes | 2097147904
disk_space_free_mb | 447084
disk_space_used_mb | 54783
disk_space_total_mb | 501867
system_open_files | 1408
system_max_files | 798044
-[ RECORD 3 ]------------------+-----------------------------------------
host_name | 127.127.127.130
open_files_limit | 65536
threads_limit | 7873
core_file_limit_max_size_bytes | 0
processor_count | 1
processor_core_count | 2
processor_description | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
opened_file_count | 7
opened_socket_count | 9
opened_nonfile_nonsocket_count | 7
total_memory_bytes | 8254820352
total_memory_free_bytes | 1747091456
total_buffer_memory_bytes | 531447808
total_memory_cache_bytes | 4813959168
total_swap_memory_bytes | 2097147904
total_swap_memory_free_bytes | 2097147904
disk_space_free_mb | 451444
disk_space_used_mb | 50423
disk_space_total_mb | 501867
system_open_files | 1408
system_max_files | 798044
In general, check the Vertica documentation for the system tables and the information available through them.
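Beyond host_resources, the nodes system table can typically be queried by a normal database user as well and covers the "find nodes" part of the question (a sketch; column names quoted from memory, so adjust against your version's documentation):
-- List the nodes in the cluster and their current state.
SELECT node_name, node_state, node_address
FROM nodes
ORDER BY node_name;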
I am getting
org.apache.solr.common.SolrException: Could not load config file
C:\nemoCode\sceneric-hybris\hybris\config\solr\embedded\solrconfig.xml
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:530)
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at org.apache.solr.core.CoreContainer.create(CoreContainer.java:597)
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:251)
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:243)
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at java.util.concurrent.FutureTask.run(FutureTask.java:262)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.util.concurrent.FutureTask.run(FutureTask.java:262)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.lang.Thread.run(Thread.java:745)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or 'C:\nemoCode\sceneric-hybris\hybris\config\solr\embedded\conf'
INFO | jvm 1 | main | 2017/05/23 11:54:01.552 | at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:342)
INFO | jvm 1 | main | 2017/05/23 11:54:01.552 | at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:288)
INFO | jvm 1 | main | 2017/05/23 11:54:01.552 | at org.apache.solr.core.Config.<init>(Config.java:116)
INFO | jvm 1 | main | 2017/05/23 11:54:01.552 | at org.apache.solr.core.Config.<init>(Config.java:86)
INFO | jvm 1 | main | 2017/05/23 11:54:01.553 | at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:139)
INFO | jvm 1 | main | 2017/05/23 11:54:01.553 | at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:527)
INFO | jvm 1 | main | 2017/05/23 11:54:01.553 | ... 9 more
But I don't have that path on my C: drive. Where is it configured that Solr should look for that particular file path?
I think the path is configured inside your solr.xml; you will find it at ${HYBRIS_CONFIG_DIR}/solr/embedded/solr.xml.
The solr.xml file specifies configuration options for each Solr core, including configuration options for multiple cores. The file also contains mappings for request URLs, and indicates which cores to load when the server starts.
So check the instanceDir and dataDir of one of your cores.
An example of core inside solr.xml
<core name="master_apparel-de_Product"
instanceDir="A:\source\hybris.5.2.0\hybris\config/solr/embedded"
dataDir="A:\source\hybris.5.2.0\hybris\data\solrfacetsearch\MASTER\apparel-de_Product_1"/>
This is not a usual location for solrconfig.xml. Usually the location is:
[solr.home]/[corename]/conf/solrconfig.xml
It is possible to deviate from that by changing the config property in the core.properties file that sits directly in the [corename] directory. That location can be relative, which may be what is causing you issues.
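For reference, a core.properties in that [corename] directory might look roughly like this (a sketch with placeholder values; config is the property that can point a core at a non-default solrconfig.xml, and dataDir can likewise be relative or absolute):
# Sketch of [corename]/core.properties (placeholder values)
name=master_apparel-de_Product
config=solrconfig.xml
dataDir=data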
In order to separate the hybris logs from the console (catalina), wrapper (tanuki), and tomcat logs, I created these two configs.
#local.properties
log4j.appender.FILE = org.apache.log4j.DailyRollingFileAppender
log4j.appender.FILE.File = ${HYBRIS_LOG_DIR}/tomcat/hybris.log
log4j.appender.FILE.Append = true
log4j.appender.FILE.DatePattern = '-'yyyy-MM-dd
log4j.appender.FILE.layout = org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern = %d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} [%-5p|%X{RemoteAddr}|%X{TomcatSessionId}|%c] %m%n
log4j.rootLogger=INFO, FILE
This was put into local.properties and writes its log output to hybris.log.
And I also created this:
#log4j_init_tomcat.properties
log4j.appender.TOMCAT_FILE = org.apache.log4j.DailyRollingFileAppender
log4j.appender.TOMCAT_FILE.File = ${HYBRIS_LOG_DIR}/tomcat/tomcat.log
log4j.appender.TOMCAT_FILE.Append = true
log4j.appender.TOMCAT_FILE.DatePattern = '-'yyyy-MM-dd
log4j.appender.TOMCAT_FILE.layout = org.apache.log4j.PatternLayout
log4j.appender.TOMCAT_FILE.layout.ConversionPattern = %d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} [%-5p|%X{RemoteAddr}|%X{TomcatSessionId}|%c] %m%n
log4j.rootLogger=INFO, TOMCAT_FILE
which is placed in log4j_init_tomcat.properties and loaded by the tanuki wrapper as:
wrapper.java.additional.22=-Dlog4j.configuration=file:%CATALINA_BASE%/conf/log4j_init_tomcat.properties
The tomcat.log file is created but remains empty, and I don't see any reason why.
This is the output from the console log file:
INFO | jvm 1 | main | 2015/09/16 22:36:32.711 | log4j: Reading configuration from URL file:../conf/log4j_init_tomcat.properties
INFO | jvm 1 | main | 2015/09/16 22:36:32.711 | log4j: Parsing for [root] with value=[INFO, TOMCAT_FILE].
INFO | jvm 1 | main | 2015/09/16 22:36:32.711 | log4j: Level token is [INFO].
INFO | jvm 1 | main | 2015/09/16 22:36:32.711 | log4j: Category root set to INFO
INFO | jvm 1 | main | 2015/09/16 22:36:32.711 | log4j: Parsing appender named "TOMCAT_FILE".
INFO | jvm 1 | main | 2015/09/16 22:36:32.711 | log4j: Parsing layout options for "TOMCAT_FILE".
INFO | jvm 1 | main | 2015/09/16 22:36:32.711 | log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} [%-5p|%X{RemoteAddr}|%X{TomcatSessionId}|%c] %m%n].
INFO | jvm 1 | main | 2015/09/16 22:36:32.711 | log4j: End of parsing for "TOMCAT_FILE".
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Setting property [datePattern] to ['-'yyyy-MM-dd].
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Setting property [append] to [true].
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Setting property [file] to [/opt/hybris/log/tomcat/tomcat.log].
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: setFile called: /opt/hybris/log/tomcat/tomcat.log, true
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: setFile ended
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Appender [TOMCAT_FILE] to be rolled at midnight.
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Parsed "TOMCAT_FILE" options.
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Finished configuring.
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Parsing for [root] with value=[INFO, FILE].
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Level token is [INFO].
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Category root set to INFO
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Parsing appender named "FILE".
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Parsing layout options for "FILE".
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} [%-5p|%X{RemoteAddr}|%X{TomcatSessionId}|%c] %m%n].
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: End of parsing for "FILE".
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Setting property [append] to [true].
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Setting property [file] to [/opt/hybris/log/tomcat/hybris.log].
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Setting property [datePattern] to ['-'yyyy-MM-dd].
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: setFile called: /opt/hybris/log/tomcat/hybris.log, true
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: setFile ended
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Appender [FILE] to be rolled at midnight.
INFO | jvm 1 | main | 2015/09/16 22:36:32.812 | log4j: Parsed "FILE" options.
INFO | jvm 1 | main | 2015/09/16 22:36:32.912 | log4j: Parsing for [org.apache.cxf] with value=[WARN].
INFO | jvm 1 | main | 2015/09/16 22:36:32.912 | log4j: Level token is [WARN].
INFO | jvm 1 | main | 2015/09/16 22:36:32.912 | log4j: Category org.apache.cxf set to WARN
INFO | jvm 1 | main | 2015/09/16 22:36:32.912 | log4j: Handling log4j.additivity.org.apache.cxf=[null]
INFO | jvm 1 | main | 2015/09/16 22:36:32.912 | log4j: Parsing for [de.hybris.platform.print.comet.utils.StopWatch] with value=[ALL].
INFO | jvm 1 | main | 2015/09/16 22:36:32.912 | log4j: Level token is [ALL].
INFO | jvm 1 | main | 2015/09/16 22:36:32.912 | log4j: Category de.hybris.platform.print.comet.utils.StopWatch set to ALL
INFO | jvm 1 | main | 2015/09/16 22:36:32.913 | log4j: Handling log4j.additivity.de.hybris.platform.print.comet.utils.StopWatch=[null]
INFO | jvm 1 | main | 2015/09/16 22:36:32.913 | log4j: Parsing for [print.soap.logging] with value=[ALL].
INFO | jvm 1 | main | 2015/09/16 22:36:32.913 | log4j: Level token is [ALL].
INFO | jvm 1 | main | 2015/09/16 22:36:32.913 | log4j: Category print.soap.logging set to ALL
INFO | jvm 1 | main | 2015/09/16 22:36:32.913 | log4j: Handling log4j.additivity.print.soap.logging=[null]
INFO | jvm 1 | main | 2015/09/16 22:36:32.913 | log4j: Parsing for [your.package] with value=[debug].
INFO | jvm 1 | main | 2015/09/16 22:36:32.913 | log4j: Level token is [debug].
INFO | jvm 1 | main | 2015/09/16 22:36:32.913 | log4j: Category your.package set to DEBUG
INFO | jvm 1 | main | 2015/09/16 22:36:32.913 | log4j: Handling log4j.additivity.your.package=[null]
I am doing all this for Logstash, so everything can be logged properly.
I hope someone can help me!
With kind regards,
Fide
Can you try this?
log4j.rootLogger=INFO,TOMCAT_FILE,FILE
log4j.logger.TOMCAT_FILE=INFO,TOMCAT_FILE
log4j.appender.TOMCAT_FILE.Threshold=INFO
log4j.additivity.com.baseframework=false
log4j.appender.TOMCAT_FILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.TOMCAT_FILE.File=${HYBRIS_LOG_DIR}/tomcat/tomcat.log
log4j.appender.TOMCAT_FILE.Append=true
log4j.appender.TOMCAT_FILE.DatePattern = '-'yyyy-MM-dd
log4j.appender.TOMCAT_FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.TOMCAT_FILE.layout.ConversionPattern=%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} [%-5p|%X{RemoteAddr}|%X{TomcatSessionId}|%c] %m%n