I have a Solr installation running on JVM 1.6.0_18, and I would like to migrate it to a much more powerful machine where it will run on a 1.6.0_21 JVM alongside another application (Solr and the other application won't share the same Tomcat instance, by the way).
Will this pose any problems? Are the JVM requirements documented anywhere?
I think you will be fine. But if someone wants to upgrade it above 1.6.0_21, they should go straight to 1.6.0_29 and not look back.
That's because in the releases after _21 and before _29, the code that Lucene uses to read variable-length integers (used all the time in search!) is sometimes wrongly compiled by HotSpot. We tried to add a hack/workaround (manually unrolling the loop to dodge the bugs), but in general I would just avoid those versions; see https://issues.apache.org/jira/browse/LUCENE-2975
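For context, the variable-length integer decode mentioned above is a tight, byte-at-a-time loop along the following lines. This is a simplified sketch, not Lucene's exact readVInt code, but it shows the kind of small, hot loop that an over-eager JIT optimization can miscompile:

    // Simplified sketch of variable-length int (VInt) decoding: each byte carries
    // 7 payload bits, and its high bit signals that another byte follows.
    static int readVInt(java.io.DataInput in) throws java.io.IOException {
        byte b = in.readByte();
        int value = b & 0x7F;
        for (int shift = 7; (b & 0x80) != 0; shift += 7) {
            b = in.readByte();
            value |= (b & 0x7F) << shift;
        }
        return value;
    }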
In response to your question about "JVM requirements": Lucene doesn't have "special" JVM requirements. It's just that we have lots of tests that actually execute things more than 10,000 times, and we have found bugs in particular JVM versions that you should avoid, that's all.
As of posting this comment, I only know of minor issues with 1.6.0_29 and 1.7.0_01, so I would really recommend those; some major bugs that previously affected Lucene are fixed there.
Related
I’ve inherited a project that uses a Solr 3.6.0 deployment. (Several masters and several slaves – I think there are 6 Solr instances in total.)
I’ve been tasked with investigating if upgrading our 3.6.0 deployment will improve performance – there’s a lot of data and things are getting slow, apparently.
I’ve read in the Apache docs that there were improvements in scalability and performance from 3.6.x to 4.x.
I see that from 4.x to 5.x, Solr is now a standalone server and no longer just a WAR running on Tomcat.
ISSUES:
A. Is it worth upgrading to 4.x or 5.x? Will I see a big improvement in performance?
B. Should I go to 4.x or 5.x? Will 4.x be an easier upgrade path since it's just a new WAR file?
C. In a nutshell ... what will the upgrade path look like, what kind of steps am I in for, and how can I avoid trouble?
Any help is GREATLY appreciated!!
It depends.
If stuff runs fine today and you don't need the extra performance or functionality, there is no good reason to touch it. You're so far behind that upgrading will be a large undertaking anyway. If you need the performance or the functionality, then yes, it's worth it: you'll see an increase in performance, and you can use a more recent Java version (the current trunk of Lucene requires Java 8).
If you can't reindex easily, you'll have to go through 4.x to get to 5.x anyway, since the current 5.x releases might not be able to read your current index format (index compatibility is generally only guaranteed one major version back). If you can reindex easily, go directly to 5.x. You'll have to deal with the change in how Solr is run at some point, so better to do it now while you're doing a breaking upgrade anyway.
Have a backup, replicate the current environment to an experimental server, and try the upgrade there. If you can easily reindex, set up 5.x, index to it, and see if you can run the application straight from that backend instead (in development, not production). If it works (and you can reindex easily), create a Solr 5.x instance to run in parallel to your current installation, reindex to that, and switch production over after confirming that it works in dev.
If you can't reindex, create a development clone of the current core and try to find the upgrade path that is able to upgrade the index files as you're going along. You're going to have to read a bit of documentation and try out different versions to get a proper migration of the index file format going.
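If you go down that route, one tool worth knowing about is Lucene's IndexUpgrader (shipped since Lucene 3.2), which rewrites every segment of an index into the format of the Lucene release it comes from. Here is a rough sketch of driving it from Java for the 3.x-to-4.x hop; the path is a placeholder, and you should verify the constructor and Version constant against the exact lucene-core jar you put on the classpath:

    // Sketch only: upgrade an old index in place so a newer Lucene/Solr can open it.
    // Stop Solr and back up the index directory before running this. The same tool
    // can also be invoked from the command line, e.g.
    //   java -cp lucene-core-4.x.y.jar org.apache.lucene.index.IndexUpgrader <indexDir>
    import java.io.File;
    import org.apache.lucene.index.IndexUpgrader;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class UpgradeIndex {
        public static void main(String[] args) throws Exception {
            // Placeholder path: point this at the core's data/index directory.
            FSDirectory dir = FSDirectory.open(new File("/path/to/solr/core/data/index"));
            // Rewrites all segments into the current (4.x) format; 5.x can then read them.
            new IndexUpgrader(dir, Version.LATEST).upgrade();
        }
    }

Note that this only converts the file format; it doesn't re-run analysis, so if analyzers or the schema changed between versions you will still get better results from a full reindex.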
I have been looking at the various MEAN stack frameworks out on the net, and whilst I am impressed with what they achieve, I have one serious concern: the number of files used in a typical stack. meanstack.js uses over 15,000 files, whilst the bmean example has a comparatively modest 1,900.
The question I am asking myself is whether I would be happy to put my trust in such a system from a production standpoint. When something goes wrong, how easy is it going to be to find the answer? You can almost bet that it will go haywire just when your most important customer logs on. Also, what happens when Angular version 2 comes along? It could require a complete rewrite, but by then the stack you're using has been customised and is difficult to change.
Am I getting over-concerned about the technology? My intended approach is to strip the client-side code out of the bmean example and rewrite it with my own - at least that way I know (and control) what goes on in the client. Do you think this is the correct way to proceed?
With most systems there is a bit of preparation required before going to production. The same is true with mean.io (using multiple CPUs, improved aggregation, caching, etc.).
The large number of files is essentially a product of the way npm handles dependencies: each module can declare its own versions of the same dependencies, which creates a bit of bloat but at the same time allows a lot of flexibility in Node.js code.
We currently have a number of mean.io projects in production phase and have been very happy with performance and the overall experience.
New releases of the project are scheduled every couple of months; upgrading should not be too much of a problem if you use the package system correctly.
Issues with the project are handled and managed through GitHub issues; additional support can be found on our IRC channel (freenode #mean_io) as well as on Facebook.
For commercial support, have a look at the support page.
I have installed Liferay-Tomcat 6.0.6 on one of my Linux machines, which has 4GB of RAM, and it uses MySQL installed on a different machine.
Liferay runs really slowly, even for 10 concurrent users.
I have attached a screenshot taken from AppDynamics which shows that EhCache and C3P0 are both responding slowly at times.
Is there any special configuration required for EhCache or C3P0?
I am currently running with default configurations.
Can you tell me the methodology for deriving those stats? In most cases, this behavior comes down to poor code implementation in a custom portlet or a JVM configuration issue, and more likely the latter. Take a look at the Ehcache Garbage Collection Tuning documentation. When you run jstat, how do the garbage collection times look?
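If you haven't used it before, jstat ships with the JDK; assuming a standard install and with <pid> standing in for your Tomcat process id, something like

    jstat -gcutil <pid> 5000

prints a GC summary every 5 seconds. The last column (GCT) is the cumulative time spent in garbage collection; if it climbs quickly relative to uptime, that points at heap sizing / GC tuning rather than at Ehcache or C3P0 themselves.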
I'm nearing completion of my first CakePHP-driven website and just saw that they're already working on CakePHP 2.0 (not a stable release yet).
My questions:
Is it incredibly time consuming to move to a new version of CakePHP (when it becomes the "stable" release that is)? I know they have migration guides, but - I've never used a framework before, so I've never had to migrate anything.
Do you migrate your code for existing projects, or leave it as is and use the new stable version for future projects only?
Where can I find what version of CakePHP I currently have installed? I've looked at the LICENCE and VERSION files, but cannot find the installed/current version listed in them.
These seem like simple questions, but I greatly appreciate any thoughts/advice - searching this on Google just brings up how-to-migrate pages, not pros/cons...etc.
I've migrated a few sites from CakePHP 1.2 to 1.3. In my experience, it takes 2-3 hours on sites that have 5-10 controllers and no custom plugins, etc. I find I typically only have to change the syntax on a handful of function calls, and when I figure out which ones, it is just a matter of doing a find / replace across the site. Of course it could be more of an issue going from 1.3 to 2.0, but I don't get the sense that it will be an especially drastic API change.
UPDATE: I'm now in the process of migrating to CakePHP 2.0 beta, and thought I should update this, as I'm finding the updates are more extensive and far-reaching than I had assumed when I wrote this. Migration guide here: https://github.com/cakephp/docs/blob/master/en/appendices/2-0-migration-guide.rst
ANOTHER UPDATE: Since people seem to be finding this useful, I just thought I'd point out that Cake now helpfully provides an upgrade shell that does some of the work for you. Note that although the documentation says it will do "most" of the work, I have found there are still quite a few function calls, etc. that will need to be updated manually (see migration guide).
http://book2.cakephp.org/en/console-and-shells/upgrade-shell.html
As dhofstet said, it will all depend on the size and complexity of your site.
Whether you upgrade at all is usually a judgment call, but sometimes you have to (e.g. Cake 1.2 has some code that will break if your host upgrades to PHP 5.3). You certainly wouldn't have the kind of security issues that an old WordPress, Drupal, etc. install would have. I have seen some noticeable speed increases with Cake upgrades, so depending on the situation it could be worth the trouble just for that (Cake 2.0 finally drops PHP 4 support). Look at the release notes and see if there are things that appeal to you in the new version.
To see your version, look at the very last line of the cake/VERSION.txt file. It's easy to miss, but it should just be a number, e.g. 1.3.8.
This question is difficult to answer, as it depends on the size and complexity of your project(s). The "big" releases (1.1 -> 1.2, 1.2 -> 1.3, 1.3 -> 2.0) usually break stuff, so you have to budget for some migration work. Migration between "smaller" releases (for example from 1.3.9 to 1.3.10), on the other hand, is usually easy; often it just means replacing the cake folder. In either case it is useful to have tests.
I migrate the projects which are actively maintained.
You can find the CakePHP version in cake/config/config.php
I'm migrating an app from 1.3 to 2.0 RC1 right now, and I've had no big trouble.
1. I had to change the names of folders/files, e.g. app_controller.php » Controller/AppController.php
2. Follow the migration guide (temporary link): http://book2.cakephp.org/en/appendices/2-0-migration-guide.html
3. Plugins/components/etc. from various sources won't work (at the very least because of point 1).
4. To update the code (which in my case wasn't needed, as the app worked fine), I baked a dummy table with the shell and looked at the differences in the generated code. It's a good starting point.
5. Authentication/Authorization changed some of its configuration, but requires only a few changes.
6. Trees still work.
7. ACL doesn't, but I'm quite sure that's my fault.
That's all for now. Good luck!
We are in the process of migrating our bug tracking to Bugzilla from a really old version of Trac, and I am running out of Advil.
We have a legacy application that has been around for a long time. Mix in the fact that our version management has been through a few iterations, and it has generated a lot of different versions in the wild. To make matters worse, because of contractual limitations it is not always possible to upgrade clients to the latest and greatest, so we must branch, fix, test and release on the version they currently have, yielding yet another version number.
The end result is that the version combo box is ludicrously long. Lastly, for various reasons, we want to track three different pieces of version information:
the version in which the bug was found (version), the version in which we plan to fix the bug (milestone), and the version in which it has ultimately been fixed (open to suggestions). Here is my problem, in fact: the last one can actually be multiple numbers, since we do retroactive fixes for some of these customers (this happens VERY often).
This is where I need your collective wisdom:
How do you keep track of these versions (found, planned and multiple fixed) in Bugzilla?
What are the best practices around linking versions and bug tracking?
Answers
It seems that cloning the bug for each version is a good way to track this: the target version is always tracked in the milestone, as is the fixed version, and the buggy version is always the native version field.
Also, having each clone block the original bug makes it easy to trace the history back to the original submission.
Although I have accepted the answer I still welcome your input.
Often, if we need to fix something in multiple released versions (generally branches in the source code repository), the bug will be cloned for each branch so that all the commits and release status can be tracked separately. I think the only time we don't do this is when the change is not directly related to the codebase itself and cannot be fixed simply by updating our libraries.
As for version tracking in general, this has struck me as a reasonable way to do things, given that we generally only need to support 2-3 major versions (plus the trunk) at any time. If you have multiple disjoint versions that need supporting, e.g. customer-specific deployments, then things are going to be harder to track. (Arguably this is going to cause headaches in general and it would be better to unify things to a more central version theme).
I use Bugzilla to keep track not only of bugs, but also of new features, enhancements, and vague ideas. For each planned and released version, I have a Tracking Bug (something that I saw on the original Mozilla bugzilla, and found to be useful).
So if you have a bug report, you enter the bug with the version number it was reported against. Then create additional bugs (one for each version you plan to fix it in) which all block the original bug and also block the version-specific Tracking Bugs.
If all bugs blocking the original bug are closed/verified (whatever your QA implements), you can also close the original bug.
I was looking for a similar feature in TFS, and while doing some investigation, I found that there is an enhancement request to manage "sightings" in Bugzilla:
"Bug 55970 - (bz-branch) Bugzilla needs to deal better with branches (implement sightings)":
https://bugzilla.mozilla.org/show_bug.cgi?id=55970
There is also a proposed design:
https://bug55970.bugzilla.mozilla.org/attachment.cgi?id=546912
For information, we are going to implement something similar in TFS 2010, with a "Bug Parent" or "Bug Master" to hold the information about the bug itself (repro steps, severity, technical info, impacted components...), which can have children of type "Bug Child" or "Sighting" that hold the information specific to a given branch (target milestone, priority, specific information for that branch...).
We are using Jira and still have this problem. I think it is a question of requirements and of how versions are used, rather than of any one tool.
Who uses versions and how do they use them?
How are versions related to milestones in a project plan?
We use a four-part version number (major.minor.patch.buildNo), where buildNo is the SVN head revision # at the time of the build. Each version is stored in Jira, and issues have an "affects version" and a "fixed-in version" field, each of which is a multi-select.
After a short while we have many versions. Jira does allow us to control the list in two ways:
1. Archive versions (greyed out from pick list)
2. Merge versions (rolls several versions together into a new version - no undo)
We have used Archive, but have avoided Merge due to the lack of undo. So we still have a list of many, many versions.
I'm sure you could probably accomplish a merge action in Bugzilla with some scripting and time, the question is: when is it OK to merge several older versions together?
If I have released, do I need to know that I have 17 builds between start and release? Do I need to keep the knowledge of a bug being found in build 1, fixed in 2, found again in 7, fixed again 9? Or is Found in release 1.0.0 fixed in release 1.0.1 good enough?
I'm going to ask a large question on this topic later on today, but I know the basic answer already:
- Depends on how your team wants to track things.
Implementation is fun, but it all comes down to requirements, goals, and working back from user experience to a solution. Which is rough when people don't necessarily know how they want to use something that doesn't quite exist in the form they'd like to use.
I have created a custom field (string) to list the version(s) (as V.M.P.B) in which a bug has been fixed.
I have also created another custom field (string) to list the version(s) affected by a bug.
Doing that, you are able to perform a quick search on a specific version.