We run a very old Solr 4.2.0 system on OpenJDK 6, and I am battling a slowness issue. At first I thought it was the database, so we tuned all the queries, but that did not help. I finally tracked the problem down to Solr performance. Every day at exactly the same time, 8:58 am CST, I see a spike in CPU usage and users feel significant slowness, and that happens to be the peak usage time, with most of the surgeons on the system. I am in the process of migrating our system to a newer platform with Elasticsearch, but in the meantime I need to keep this system going.
I suspect this is related to GC on Solr. I know this is an age-old topic that has been discussed at length. Below is the command I see running on the system. Given that it has two heap switches, -Xmx128M and -Xmx12500m, which one takes precedence, and what is my actual heap size? My machine has 32 GB of memory, so I can bump this up if necessary. I am not sure where the 12500m is getting set: I looked at the /etc/init.d/tomcat6 script and it is set to 128M there, so where is this other setting being picked up from? We are on an Ubuntu system in AWS. Thanks for any help.
tomcat6 24483 17.3 44.7 19723020 13805172 ? Sl Apr13 3597:31 /usr/lib/jvm/java-6-openjdk/bin/java -Djava.util.logging.config.file=/var/lib/tomcat6/conf/logging.properties -Djava.awt.headless=true -Xmx128M -XX:+UseConcMarkSweepGC -Xms10000m -Xmx12500m -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/share/tomcat6/endorsed -classpath /usr/share/tomcat6/bin/bootstrap.jar -Dcatalina.base=/var/lib/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.io.tmpdir=/tmp/tomcat6-tmp org.apache.catalina.startup.Bootstrap start
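For reference, here is where I am planning to look for where that second -Xmx is being appended: on Ubuntu the tomcat6 init script normally pulls extra JVM options (JAVA_OPTS) from /etc/default/tomcat6, and a bin/setenv.sh can also add to them if one exists. This is only a rough sketch; the setenv.sh path is a guess, since the stock package may not ship one.

# Hunt for wherever the 12500m heap setting is defined
grep -R "12500" /etc/default/tomcat6 /usr/share/tomcat6/bin /var/lib/tomcat6 2>/dev/null
# See which files set JAVA_OPTS or CATALINA_OPTS
grep -R "JAVA_OPTS\|CATALINA_OPTS" /etc/default/tomcat6 /usr/share/tomcat6/bin 2>/dev/null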
Related
My team is working on a search application for our websites. We are using Collective Solr in Plone to index our intranet and documentation sites. We recently set up shared blob storage on our test instance of the intranet site because Solr was not indexing our PDF files. That appears to be working; however, each time I run the reindexing script (##solr-maintenance/reindex) it stops after about an hour and a half. I know it is not indexing our entire site, as numerous pages, files, etc. are missing when I run a query in the Solr dashboard.
The warning below is the last thing I see in the Solr log before the script stops. I am very new to Solr, so I'm not sure what it indicates. When I run the same script on our documentation site, it completes without error.
2017-04-14 18:05:37.259 WARN (qtp1989972246-970) [ ] o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_284]
java.nio.file.NoSuchFileException: /var/solr/data/uvahealthPlone/data/index/segments_284
I'm hoping someone out there might have more experience with Collective Solr for Plone and could recommend some good resources for debugging this issue. I've done a lot of searching lately but haven't found much useful info.
This was a bug that was fixed some time ago by https://github.com/collective/collective.solr/pull/122
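If you want to confirm whether your install already includes that fix, one rough check (assuming a buildout-based Plone setup; the buildout.cfg and versions.cfg file names are just the usual defaults and may differ in your project) is to look at the pinned collective.solr version:

# Find the collective.solr version pin in the buildout configuration
grep -ri "collective.solr" buildout.cfg versions.cfg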
Unfortunately an Ubuntu machine I manage has stopped working, and after much work it seems like I'll have to reinstall the system. All the data from the old system is intact and backed up.
Among this data is a PostgreSQL installation with some databases (it was running in isolation on this machine). My goal is to move this data as-is and run it on the fresh install.
Since the old system is not running, I can't do a pg_dump.
According to this article it should be possible to move the data folder, but two restrictions are mentioned, and I do not fully understand whether they will be a problem for me.
I can't seem to find much information on this online, since everything refers to the preferred pg_dump method.
Any help would be highly appreciated.
As per the suggestions in the comments on the initial answer, the Postgres directories were copied to the new host and the database started without any problems.
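For anyone attempting the same move, the copy amounted to roughly the following (a minimal sketch: the 9.1 version number and the default Ubuntu data directory are assumptions, the backup path is a placeholder, and the PostgreSQL major version on the new host has to match the old one):

# Stop the new cluster before overwriting its data directory
sudo service postgresql stop
# Copy the old cluster into place, preserving permissions and ownership
sudo rsync -a /backup/postgresql/9.1/main/ /var/lib/postgresql/9.1/main/
sudo chown -R postgres:postgres /var/lib/postgresql/9.1/main
sudo service postgresql start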
I'm looking for an Apache 2.4.x module or solution which allows me to configure bandwidth quotas. Long ago I used mod_throttle, which only works with 1.3 and is no longer maintained. I've also been using mod_cband, which I've patched to work with Apache 2.4, and it seems to be doing the job, but I'm worried that future Apache upgrades could leave this software behind as well; it appears mod_cband is no longer maintained either.
I've looked at mod_bandwidth, but that only appears to work with 1.3.x, and mod_ratelimit doesn't do exactly what I'm looking for.
Specifically, I'm looking for a way to set a maximum quota for each virtual host; when that limit is reached, either the connection is slowed down or an error is displayed. The quota should reset itself automatically based on a pre-defined period, e.g. 30 days, 2 hours, etc.
Any guidance would be nice. Paid software is OK for me too, as long as I can demo it. An open-source solution would be best, of course =)
I should add that this needs to be for Unix/Linux, not Windows!
Try mod_cband:
http://dembol.org/blog/mod_cband
It manages bandwidth and quotas, and it works on Linux.
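A rough sketch of what a per-virtualhost quota looks like with mod_cband (the directive names are taken from the mod_cband documentation, but example.com, the 10G/4W values and the exceeded URL are placeholders; verify everything against your patched 2.4 build):

<VirtualHost *:80>
    ServerName example.com
    # Total transfer quota for this vhost
    CBandLimit 10G
    # Window after which the counter resets (here roughly a month)
    CBandPeriod 4W
    # Page shown once the quota is exhausted
    CBandExceededURL http://example.com/quota-exceeded.html
</VirtualHost>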
I have a Solr installation running on JVM 1.6.0_18 and I would like to migrate it to a much more powerful machine, where it will share a 1.6.0_21 JVM with another application (Solr and the other application won't share the same Tomcat instance, by the way).
Will this pose any problems? Are the JVM requirements documented anywhere?
I think you will be fine. But if someone wants to upgrade it above 1.6.0_21, maybe you should go to 1.6.0_29 and not look back.
This is because in the versions after _21 and up until _29, the code that Lucene uses to read variable-length integers (used all the time in search!) is sometimes wrongly compiled by HotSpot. We tried to add a hack/workaround (manually unrolling it to dodge the bugs), but in general I would just avoid those versions; see https://issues.apache.org/jira/browse/LUCENE-2975
In response to your question about "JVM requirements": Lucene doesn't have "special" JVM requirements. It's just that we have lots of tests that actually execute things more than 10,000 times, and we have found bugs in particular versions that you should avoid, that's all.
As of posting this comment, I only know of minor issues with 1.6.0_29 and 1.7.0_01, so I would really recommend those, as some major bugs that previously affected Lucene are fixed there.
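If it helps, one simple check before and after the move is to confirm the exact update level on the new machine, since it is the build string that matters here:

# Prints the exact JVM build, e.g. 1.6.0_21 vs 1.6.0_29
java -version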
I have installed Liferay-Tomcat 6.0.6 on one of my Linux machines, which has 4 GB of RAM, and it uses MySQL installed on a different machine.
Liferay runs really slowly, even with just 10 concurrent users.
I have attached a screenshot taken from AppDynamics which shows that both Ehcache and C3P0 are responding slowly at times.
Are there any special configurations required for Ehcache or C3P0?
I am currently running with default configurations.
Can you tell me the methodology for deriving those stats? In most cases, this behavior comes down to poor code in a custom portlet or to a JVM configuration issue, and more likely the latter. Take a look at the Ehcache Garbage Collection Tuning documentation. When you run jstat, how do the garbage collection times look?
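For reference, a minimal way to watch those numbers (the 5-second interval and the <tomcat-pid> placeholder are just illustrative):

# Prints heap occupancy and cumulative GC times every 5 seconds
jstat -gcutil <tomcat-pid> 5000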