database=admin command={:ismaster=>1} (4002.9714ms)
I see this happening every millisecond; well, maybe I'm exaggerating, it happens every second.
The problem is that when you have replica set members located in different physical locations, you get slow responses on that query, which runs every second against all replica set members. (By the way, the members are not defined as hosts in the Mongoid config; Mongoid figures them out from the replica set configuration and starts this constant polling on its own.)
I'm wondering: can I turn that option off?
I believe this is happening to keep Mongoid up to date on which node it should write to. This looks like an old version, Mongoid 3.0.x; newer versions should not have that problem.
I'm also not sure whether newer versions of Mongoid handle primary changes correctly.
I must say it's always better to use the language driver directly, the MongoDB Ruby driver in this case!
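To illustrate that last point: the modern MongoDB Ruby driver (2.x) lets you tune how often the topology check runs, so you are not stuck with a hard-coded one-second poll. A minimal sketch, assuming driver 2.x; the hosts, database name, and replica set name are placeholders:

require 'mongo'

# heartbeat_frequency controls the interval (in seconds) between the
# driver's background server checks (the ismaster-style command logged above).
client = Mongo::Client.new(
  ['node1.example.com:27017', 'node2.example.com:27017'],  # placeholders
  database: 'mydb',            # placeholder
  replica_set: 'rs0',          # placeholder
  heartbeat_frequency: 30      # check every 30s instead of the default
)

puts client.cluster.servers.inspect  # monitoring happens on background threads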
We have some sites which use Solr as an internal search. This is done with the extension ext:solr from DKD. The extension includes an install script which provides cores for multiple languages.
This is working well on most systems.
Meanwhile we have some bigger sites, and because of some special requirements we run into problems:
We have sites which import data on a regular basis from outside of TYPO3. To keep the Solr index up to date we need to rebuild the complete index (at night). But as a site gets bigger, the reindex takes longer and longer, and if an error occurs the index is broken the next day.
You could say: no problem, just refresh all records. But that would leave information in the index for records which were deleted in the meantime (there is no 'delete' information in the import; a deleted record is simply no longer in the import). So a complete delete of all records before the import (or special marking and explicit deletion afterwards) is necessary.
Anyway, the reindex takes very long and can't be triggered at just any time, and an error leaves the index incomplete.
In theory there is the option to work with two indices: one which is built up anew while the other one serves search requests. This way you always have a complete index, even if it might not be fully up to date. After the new index is built you can swap the indices and rebuild the older one.
That would need to be triggered from inside TYPO3, but I have not found anything about such a configuration.
Another theoretical option might be a master-slave configuration, but as far as I can tell:
when the index of the master is reset for the rebuild, this reset would be synchronized to the slave, which then loses all the information it should serve until the rebuild is complete.
(I think the problem is independent of a specific TYPO3 or solr version, so no version tag)
Do you know about the read and write concept introduced in EXT:solr 9? https://docs.typo3.org/p/apache-solr-for-typo3/solr/11.0/en-us/Releases/solr-release-9-0.html#support-to-differ-between-read-and-write-connections
Isn't that something for your case?
The only thing you need is to set it up properly in your deployment.
If your fresh index is finalized and not broken, you just switch the read core to read from the previous write core.
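For anyone doing the swap by hand instead of through the extension: Solr's CoreAdmin API has a SWAP action that atomically exchanges two cores, which matches the two-index idea from the question. A minimal sketch in Ruby; the host and core names are assumptions, not EXT:solr configuration:

require 'net/http'
require 'uri'

solr = 'http://localhost:8983'  # placeholder host
params = { action: 'SWAP', core: 'core_write', other: 'core_read' }
uri = URI("#{solr}/solr/admin/cores?#{URI.encode_www_form(params)}")

# After the swap, the name 'core_read' points at the freshly built index.
response = Net::HTTP.get_response(uri)
raise "Swap failed: #{response.body}" unless response.is_a?(Net::HTTPSuccess)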
Some colleagues were using an Excel file to keep track of some issues, and they have decided to switch to a better system, asking me to set up a Jira project for them and to import all the tickets. One way or another I have done everything, but the resolution dates are now wrong, because they reflect when I ran the script that imported the tickets into Jira. They would like to have the original dates, so that they can know when an issue was really fixed. Unfortunately there's no way to change it from Jira's interface, so I have to access the DB directly. The command, for the record, is like:
-- string literals in single quotes (double quotes denote identifiers in ANSI SQL)
update jiraissue
set RESOLUTIONDATE = '2015-02-16 14:48:40'
where pkey = 'OV001-1';
Now, low-level writes to a database are dangerous in general, and I am wondering what risks there are. Our test server is not available right now, so I'd have to work directly on the production one. One thing I did see on our test server is that this seemed to work, except that JQL queries such as
resolved < "2015-03-20"
return wrong results because they still use the old resolution date. Clearly, I have to reindex; but I'm wondering whether that is safe. Does Jira perform any consistency checks, like verifying that a ticket is resolved after it was created? In my case, since I have modified the resolution date but not the creation date, a ticket could appear resolved before it was created, which is a clear inconsistency. Will Jira complain about this? Is there a risk of corrupting the DB? And if I also modify the creation date, do I have to watch out for anything else?
We are using Jira 5.2.11.
I have access to the test server again, and I have tried it. I modified all the RESOLUTIONDATE fields I had to fix, and when I reloaded the page the new dates were there. Jira didn't complain about anything. I reindexed the server, so that queries yield correct results, and I saw no issues. Then I even ran the integrity checks (Administration -> System -> Integrity Checker), and no errors were found.
Finally I did the same on the production server, and again everything is running fine.
I can therefore conclude that the operation is not dangerous at all, and it can be done safely.
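For completeness, here is a hedged sketch of how the same change could be applied with a verification step and an explicit transaction, assuming Jira runs on MySQL and the mysql2 gem is available; the credentials and issue key are placeholders. (Remember that Jira still needs a reindex afterwards.)

require 'mysql2'

client = Mysql2::Client.new(host: 'localhost', username: 'jira',
                            password: 'secret', database: 'jiradb')  # placeholders

client.query('START TRANSACTION')
begin
  # Look at the row first, so a typo in the key fails loudly instead of
  # silently updating zero rows.
  row = client.prepare('SELECT RESOLUTIONDATE FROM jiraissue WHERE pkey = ?')
              .execute('OV001-1').first
  raise 'issue not found' unless row

  client.prepare('UPDATE jiraissue SET RESOLUTIONDATE = ? WHERE pkey = ?')
        .execute('2015-02-16 14:48:40', 'OV001-1')
  client.query('COMMIT')
rescue => e
  client.query('ROLLBACK')
  raise e
end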
We have a constant problem with the project XML file (*.sqlproj). If files are added/removed/moved, it automatically adds/removes entries in some unexpected places. After that we have big trouble merging it when somebody else has changed that file as well.
We came to the conclusion that we might sort the file before check-in. We would sort it alphabetically, and in that case the merge tool would understand it much better.
So, my questions would be:
Is it possible to re-arrange the sqlproj file somehow before EVERY check-in? Maybe there are options/tools that already do that? (A rough sketch of the idea follows after these questions.)
Are there any other ways to make developers' lives easier?
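For the first question, a pre-check-in sort can be scripted. A rough sketch using Ruby's bundled REXML, which sorts the children of every ItemGroup by element name and Include attribute so the file order is stable across developers; the file name is a placeholder, and a real project may want extra handling for output formatting:

require 'rexml/document'

path = 'MyDatabase.sqlproj'  # placeholder
doc = REXML::Document.new(File.read(path))

# sqlproj files are MSBuild XML; items live in ItemGroup elements and
# are identified by their Include attribute.
doc.root.each_element('ItemGroup') do |group|
  items = group.elements.to_a
  sorted = items.sort_by { |e| [e.name, e.attributes['Include'].to_s] }
  items.each { |e| group.delete_element(e) }
  sorted.each { |e| group.add_element(e) }
end

File.write(path, doc.to_s)

Hooked into a pre-commit step, this keeps the ItemGroup order deterministic, which is what makes the merges tractable.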
UPDATE:
Once again I got the same problem. The sqlproj file was modified 3 times, and I want to merge only the last change to production; the other 2 are not tested yet. In the merge tool I have the option to add all 3 new objects or to leave the file without changes. I am not able to select only the last change ...
EXAMPLE:
developerA created tableA and checked in;
developerB got the latest version of the dev branch, created tableB and checked in;
developerC got the latest version of the dev branch, created tableC and checked in. developerC tested the code and is ready to go to production. He tries to merge his code to QA and gets a conflict where his only option is to take ALL the changes.
I understand the scenario you are running into very well. This typically happens when you have multiple work streams in a single repository and no common promotion schedule (as in, all work goes to QA at the same time and to PROD at the same time).
There are a few ways I can think of to get around this problem, and there are pros and cons to each option.
1. Lock each environment until everything can promote together. Not realistic in most cases.
2. When you are ready to promote, create a promotion branch from the source environment and take the things out of the promotion branch that aren't ready to promote to the destination environment. This allows devs to keep working and still be able to promote without freezing.
3. Hybrid approach: don't source control anything in Dev until it's ready to promote to test, then do option #1 or #2 from there onward.
4. Create a more flexible ecosystem that can spin up an environment for each feature branch in order to demo/test with others (or at least allocate/rotate enough environments between the developers to accomplish the same objective). Once it's accepted, promote. This is what we are working towards currently, but building out the infrastructure and process when you have a ton of interconnected databases and apps that share them is challenging to say the least (especially in the Microsoft world).
Anyway, hope this helps...
1 - What source control are you using? No source control that I am aware of understands the structure of sqlproj files, but this isn't normally a problem.
2.a - This shouldn't be a problem you get constantly. Are you checking in/out regularly? I would only expect to see issues if different developers are making large-scale changes to the projects and not checking out / checking in before and after.
2.b - It is also possible you are not merging correctly; if you take both sets of changes then it is normally fine.
Ed
I've run into a situation where we sometimes have to completely wipe an index and then re-index a collection. This process of course takes a lot of time, and I don't want to allow any downtime in prod, or at least no extended downtime. Thus, I am researching a way in Solr to create a new collection that is a copy of an old collection, but without the data. I can re-index this new collection with little or no service degradation. I then want to point the alias that our clients are using at the new collection, so that they will start using it without even knowing.
I'm currently running 4.2, but I'm wondering if I should upgrade to 4.7 to support this better. It seems like 4.2 has most of the same Collections API support.
One of the first snags I'm hitting is that the collection I am copying has a lib folder in it with custom libraries. If possible I want to push these out to the solrhome/lib folder so that they only get loaded once. My problem with this is that if I have different versions of, say, a custom data importer, then I will run into classloader issues.
Has anyone successfully implemented this kind of scenario, and could you provide some insight into the pitfalls and successes you had, and what worked for you?
More Details...
I have many different collections that are part of this SolrCloud. If possible, I don't want to affect any of the other collections while making changes to the new copied collection.
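The alias flip itself is a single Collections API call: CREATEALIAS both creates and re-points an alias, so clients querying the alias move over in one step. A minimal sketch in Ruby; the host, alias, and collection names are placeholders:

require 'net/http'
require 'uri'

solr = 'http://localhost:8983'  # placeholder host
# Re-point the existing alias 'products' at the freshly indexed collection.
params = { action: 'CREATEALIAS', name: 'products', collections: 'products_v2' }
uri = URI("#{solr}/solr/admin/collections?#{URI.encode_www_form(params)}")

response = Net::HTTP.get_response(uri)
raise "Alias update failed: #{response.body}" unless response.is_a?(Net::HTTPSuccess)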
I'm in a similar kind of situation, where I might have to modify the Solr schema and then need to re-index all the data, but I don't have much room for downtime in production. So we came up with a solution like this:
Let's say I have SolrCloud1 (the existing one) with collection1 (which has its own structure). My application runs on a different machine, and there is a load balancer between SolrCloud1 and the application.
Now, create a separate SolrCloud (say, SolrCloud2) with collection1, maintaining the same structure as before. Do the re-indexing in this SolrCloud2. When it's done, make the new SolrCloud available under the load balancer, and once SolrCloud2 is up, shut down SolrCloud1.
Thus you re-index the data without any production downtime, and users won't notice anything. Hope this helps.
Mongoid only seems to have an app-wide Mongoid.allow_dynamic_fields setting. But I want every collection to decide whether it wants dynamic fields or not. How can I do that?
This is not possible in Mongoid 3.x yet, even though there is already a pull request for it: https://github.com/mongoid/mongoid/pull/2563. It will be released in Mongoid 4.0. However, if you really need that feature, you could check out those 2 commits in a local folder or in a fork and use them.
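For reference, once you are on Mongoid 4, dynamic fields become a per-model opt-in module rather than a global flag. A sketch assuming Mongoid 4; the model names are placeholders:

require 'mongoid'

class FlexibleEvent
  include Mongoid::Document
  include Mongoid::Attributes::Dynamic  # this model accepts undeclared fields

  field :name, type: String
end

class StrictUser
  include Mongoid::Document  # no Dynamic module: undeclared attributes raise

  field :email, type: String
end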