I am actively working on migrating an application from JBoss AS 5 to JBoss AS 7.
After the migration, I have noticed that database calls have taken a huge performance hit.
I was using ojdbc14 with a pool (min 5, max 100) on JBoss 5, and it was working pretty well for us.
With JBoss AS 7, I have installed the driver as a module.
All my queries are now taking over 10 times longer.
For example, a query that took 30 ms on JBoss AS 5 is taking 400-600 ms on JBoss AS 7.
I have tried both of the drivers shown in the config below (ojdbc6, ojdbc14).
One observation: the performance decrease is more noticeable on a Linux machine than on an OS X machine.
FYI, I have tried running JBoss AS 7 on both Java 1.6 and 1.7.
My application itself contains:
a Struts 2 front end (used for request processing; no web UI involved)
session beans in the backend
the QuartzPlugin for batch jobs
a custom MBean
I have tried:
lowering and increasing the pool size
prefill = true and false
switching between ojdbc6 and ojdbc14
standalone.xml
<subsystem xmlns="urn:jboss:domain:datasources:1.0">
  <datasources>
    <datasource jta="true" jndi-name="java:jboss/datasources/MyDS" pool-name="hive-datasource" enabled="true" use-java-context="true" use-ccm="true">
      <connection-url>jdbc:oracle:thin:@host:port:service</connection-url>
      <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
      <connection-property name="defaultRowPrefetch">50</connection-property>
      <driver>oracle14</driver>
      <pool>
        <min-pool-size>20</min-pool-size>
        <max-pool-size>100</max-pool-size>
        <prefill>true</prefill>
        <use-strict-min>false</use-strict-min>
        <flush-strategy>FailingConnectionOnly</flush-strategy>
      </pool>
      <security>
        <user-name>theusername</user-name>
        <password>thepassword</password>
      </security>
      <validation>
        <check-valid-connection-sql>select * from dual</check-valid-connection-sql>
        <validate-on-match>false</validate-on-match>
        <background-validation>false</background-validation>
        <use-fast-fail>false</use-fast-fail>
        <exception-sorter class-name="org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter"/>
      </validation>
      <timeout>
        <set-tx-query-timeout>true</set-tx-query-timeout>
        <blocking-timeout-millis>300000</blocking-timeout-millis>
        <idle-timeout-minutes>30</idle-timeout-minutes>
      </timeout>
      <statement>
        <track-statements>false</track-statements>
        <prepared-statement-cache-size>0</prepared-statement-cache-size>
      </statement>
    </datasource>
    <drivers>
      <driver name="oracle6" module="com.oracle.ojdbc6">
        <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
      </driver>
      <driver name="oracle14" module="com.oracle.ojdbc14">
        <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
      </driver>
    </drivers>
  </datasources>
</subsystem>
We see upgrades changing database performance quite often. The key is to pin down how database query behavior has changed between JBoss 5 and JBoss 7. The fact that you are seeing a more substantial degradation on one OS than on another is not surprising, since every OS has its strengths and weaknesses when it comes to processing efficiency.
My suggestion would be to get high visibility into the database, including the queries that are causing the biggest bottlenecks on JBoss 7 and the top Wait Events. In case you are not familiar with them, Oracle breaks query execution down into discrete steps called Wait Events. These can be anything from waiting on a table lock to disk I/O. There are close to 1,000 Wait Events in Oracle, so gathering this information manually and correlating each wait event with the query and hardware resources can be very difficult.
Here is a link to a free version of the Ignite database monitoring software that should help you out: http://www.ignitefree.com
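Separately, one configuration detail in the datasource above is worth double-checking: the prepared statement cache is disabled (prepared-statement-cache-size is 0), so every statement is re-prepared on each execution. As a minimal sketch of what enabling it could look like in the statement block (the cache size of 100 is an arbitrary starting point, not a tuned value):

<statement>
  <track-statements>false</track-statements>
  <!-- cache up to 100 prepared statements per connection;
       100 is an illustrative starting point, not a tuned value -->
  <prepared-statement-cache-size>100</prepared-statement-cache-size>
  <!-- reuse the same statement when identical SQL is prepared twice
       on one connection -->
  <share-prepared-statements>true</share-prepared-statements>
</statement>

Whether this explains a 10x slowdown is another matter, but it is a cheap experiment to run alongside the database-side monitoring.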
For example, an operation that took 2 ms on JBoss 5/Java 6 is taking 2-5 seconds on JBoss 7/Java 6 or 7. This happens only when I run on a Linux machine; the machine itself has 8+ GB of free memory and a Xeon processor. Everything runs normally when I run it on an OS X server or laptop.
Related
After a dbt upgrade on our project (dbt-core==1.0.4 -> dbt-core==1.3.2), I observed a huge performance decrease when models are built. Basically, our 84 models would run in 1h20min with version 1.0.4, but the run never finishes with version 1.3.2.
I have checked for breaking changes between the versions but haven't found a clue as to why the performance issue occurred. I would greatly appreciate any advice on what to check or change.
Some information:
We have 258 models, 3104 tests, 7 snapshots, 0 analyses, 16 seed files, 190 sources, 0 exposures, and 0 metrics. The process that runs models (dbt run) is the main issue, as it doesn't complete due to the performance decrease.
It seems that each successive model takes longer to process. The first ~10 models run in the same time as with version 1.0.4, but the longer the process goes, the worse performance gets. A simple model built as model #33 takes 10 seconds with version 1.0.4 but over 3000 seconds with version 1.3.2.
We use the SQL Server engine with the dbt-sqlserver==1.3.0 adapter for the 1.3.2 version.
The performance decrease is observed both on our servers and in my local testing environment, which would imply there is something wrong with our dbt project, not the machine itself.
We use 2 threads and the ODBC Driver 18 for SQL Server.
I compared the SQL scripts generated by dbt between the 1.0.4 and 1.3.2 versions, and there were no changes.
I have studied the release notes for the dbt-core and dbt-sqlserver releases but haven't found a clue as to what change could decrease performance. I have also tried intermediate versions (1.2.X), but they resulted in exactly the same huge performance decrease.
We use a very old Solr 4.2.0 system on OpenJDK 6. I am battling a slowness issue on our system. First I thought it was the database, so we tuned all the queries. It did not help. Finally I tracked the issue down to the performance of the Solr system. It happens every day at exactly the same time, 8:58 am CST: I see a spike in CPU usage, and users feel a significant slowness. That happens to be the peak usage time, with most of the surgeons on the system. I am in the process of migrating our system to a newer platform with Elasticsearch; in the meantime I need to keep this system going.

I thought this was related to GC on Solr. I know this is an age-old topic discussed at length. What I see running on the system is the following command. With that, what is my heap size? It has two switches, -Xmx128M and -Xmx12500m; which takes precedence? My machine has 32 GB of memory, and I can bump this up if necessary. I am not sure where this 12500m is getting set. I looked at the /etc/init.d/tomcat6 script, and it is set to 128M there. I am not sure where this other setting is getting picked up from. We are on an Ubuntu system in AWS. Thanks for any help.
tomcat6 24483 17.3 44.7 19723020 13805172 ? Sl Apr13 3597:31 /usr/lib/jvm/java-6-openjdk/bin/java -Djava.util.logging.config.file=/var/lib/tomcat6/conf/logging.properties -Djava.awt.headless=true -Xmx128M -XX:+UseConcMarkSweepGC -Xms10000m -Xmx12500m -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/share/tomcat6/endorsed -classpath /usr/share/tomcat6/bin/bootstrap.jar -Dcatalina.base=/var/lib/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.io.tmpdir=/tmp/tomcat6-tmp org.apache.catalina.startup.Bootstrap start
I am having issues with soft auto-commit (near real time). I am using Solr 4.3 on Tomcat. The index size is 10.95 GB. With the configuration below, it takes more than 60 seconds for an indexed document to be returned. When adding documents to Solr and searching after the soft commit time, it returns 0 hits. It takes a long time before the document actually starts showing up, even longer than the autoCommit interval.
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
The machine is Ubuntu 13 with 4 cores and 16 GB RAM, with 6 GB given to Solr running on Tomcat.
Can somebody help me with this?
If you are adding new documents using a Solr client, try using commitWithin; it is more flexible than autoSoftCommit. One more thing: make sure you have the update log enabled in solrconfig.xml, and the get handler as well. You can get more details here: http://wiki.apache.org/solr/NearRealtimeSearch
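As a rough sketch of that advice (the document id and the 10-second commitWithin value are illustrative, not taken from the setup above):

<!-- XML update message: ask Solr to commit these docs within 10 seconds -->
<add commitWithin="10000">
  <doc>
    <field name="id">example-1</field>
  </doc>
</add>

<!-- solrconfig.xml: the update log, plus the real-time get handler,
     which can return a document by id before it is visible to search -->
<updateLog>
  <str name="dir">${solr.data.dir:}</str>
</updateLog>

<requestHandler name="/get" class="solr.RealTimeGetHandler">
  <lst name="defaults">
    <str name="omitHeader">true</str>
  </lst>
</requestHandler>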
I'm indexing the content I have, and after upgrading my Solr instance to Solr 4 I'm facing some OutOfMemoryErrors. The exception thrown is:
INFO org.apache.solr.update.UpdateHandler - start commit{flags=0,_version_=0,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false}
ERROR o.a.solr.servlet.SolrDispatchFilter - null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:469)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:297)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:395)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:250)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:166)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.OutOfMemoryError: Java heap space
Is there some known bug, or something I could test to get rid of it?
Within this upgrade, two things changed:
the Solr version (from 3.4 to 4.0);
the Lucene match version (from LUCENE_34 to LUCENE_40).
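For reference, the Lucene match version here refers to the luceneMatchVersion element near the top of solrconfig.xml, which after the upgrade reads:

<luceneMatchVersion>LUCENE_40</luceneMatchVersion>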
At a glance, it seems to be running out of memory when accessing the logs. That may not be particularly meaningful with an 'Out of Memory' error, of course, but it is worth a shot, particularly after seeing this complaint regarding Solr 4.0 logging, and particularly if this is occurring during an index rebuild of some form, or under a heavy load of updates.
So try disabling the update log, which I believe can be done by commenting out:
<updateLog>
  <str name="dir">${solr.data.dir:}</str>
</updateLog>
in solrconfig.xml.
EDIT:
Another (possibly better) approach to this, taking another glance at it, might be to commit more often. The growth of the update log seems to be directly related to having a lot of queued updates waiting for commit.
If you do not have autocommit enabled, you might want to try adding it in your config, something like:
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
There's a good bit of related discussion and recommendation to be found on this thread.
I ran into the same problem today, and after reading the thread @femtoRgon suggested, I changed the following in solrconfig.xml:
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

to

<autoCommit>
  <maxDocs>15000</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>
It no longer gives me that error. It now commits every 15,000 docs, which in my case is frequent enough not to run into memory issues. On my MacBook Pro it took a few minutes to index ~4M documents containing product information (so, short documents).
I have installed the Liferay-Tomcat 6.0.6 bundle on a Linux machine with 4 GB of RAM; it uses MySQL installed on a different machine.
Liferay runs really slowly, even for 10 concurrent users.
I have attached a screenshot taken from AppDynamics which shows that EhCache and C3P0 are both responding slowly at times.
Are there any special configs required for EhCache or C3P0?
I am currently running with the default configuration.
Can you tell me the methodology for deriving those stats? In most cases, this behavior comes down to either a poor code implementation in a custom portlet or a JVM configuration issue, and more likely the latter. Take a look at the Ehcache Garbage Collection Tuning documentation. When you run jstat, how do the garbage collection times look?
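If jstat does point at cache-driven heap pressure rather than portlet code, one lever is to bound individual caches in the Ehcache configuration. A minimal sketch, with a purely hypothetical cache name and illustrative limits (these are not Liferay defaults):

<ehcache>
  <!-- hypothetical cache; the name, sizes, and timeouts are illustrative -->
  <cache name="com.example.model.SomeEntity"
         maxElementsInMemory="10000"
         eternal="false"
         timeToIdleSeconds="600"
         timeToLiveSeconds="3600"
         overflowToDisk="false"/>
</ehcache>

Smaller, shorter-lived caches trade some hit rate for less old-generation pressure, which is usually the right trade if GC pauses are what users feel.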