I called splitshard, and now this is what I see even after posting a commit:
I thought splitshard was supposed to get rid of the original shard, shard1, in this case. Am I missing something? I was expecting the only two remaining shards to be shard1_0 and shard1_1.
The REST call I used was /admin/collections?collection=default-collection&shard=shard1&action=SPLITSHARD if that helps.
Response from the Solr mailing list:
Once the SPLITSHARD call completes, it just marks the original shard as inactive, i.e., it no longer accepts requests. So yes, you would have to use DELETESHARD (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api7) to clean it up.
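In case it helps, here's a minimal sketch of that cleanup call in TypeScript; the host and port are assumptions (a default local SolrCloud node), while the collection and shard names come from the question:

```typescript
// Sketch: delete the now-inactive parent shard via the Collections API.
// Assumes a SolrCloud node at localhost:8983 and a fetch-capable runtime (Node 18+).
async function deleteInactiveShard(collection: string, shard: string): Promise<void> {
  const url =
    `http://localhost:8983/solr/admin/collections` +
    `?action=DELETESHARD&collection=${collection}&shard=${shard}&wt=json`;
  const res = await fetch(url);
  const body = await res.json();
  if (body.responseHeader?.status !== 0) {
    throw new Error(`DELETESHARD failed: ${JSON.stringify(body)}`);
  }
}

// DELETESHARD only works on shards with no active traffic, which is exactly
// the state the parent shard is left in after SPLITSHARD completes.
await deleteInactiveShard("default-collection", "shard1");
```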
As for what you see on the admin UI, that information is wrong, i.e., the UI does not respect the state of the shards while displaying them. So even though the parent shard might be inactive, you would still see it as just another active shard. There's an open issue for this one.
One way to confirm the shard state is by looking at it in clusterstate.json (or state.json, depending on the version of Solr you're using).
How can I invalidate a single item when working with useInfiniteQuery?
Here is an example that demonstrates what I am trying to accomplish.
Let's say I have a list of members, and each member has a follow button. When I press the follow button, a separate call goes to the server to mark that the given user is following another user. After this, I have to invalidate the entire infinite query to reflect the follow state for a single member. That means I might have a lot of users loaded in the infinite query, and I need to re-fetch all the items that were already loaded just to reflect the change for one item.
I know I can change the value with queryClient.setQueryData when the follow request returns success, but without following this up with an invalidation and refetch of the member, I am basically going out of sync with the server and relying on local data.
Any possible ways to address this issue?
Here is a reference UI photo in case it's helpful.
I think it is not currently possible because react-query has no normalized caching and no underlying schema. So one entry in a list (doesn't matter if it's infinite or not) does not correspond to a detail query in any way.
If you prefix the query-keys with the same string, you can utilize the partial query key matching to invalidate in one go:
['users', 'all']
['users', 1]
['users', 2]
queryClient.invalidateQueries(['users']) will invalidate all three queries.
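For concreteness, here's a minimal sketch of that key layout, assuming react-query v3 and hypothetical fetchUsers/fetchUser API helpers:

```typescript
// Sketch: all query keys share the 'users' prefix, so a single
// invalidateQueries call matches the list and every detail query.
import { useInfiniteQuery, useQuery, useQueryClient } from "react-query";

// Hypothetical API helpers (assumptions, not part of react-query):
declare function fetchUsers(cursor: number): Promise<{ users: unknown[]; nextCursor?: number }>;
declare function fetchUser(id: number): Promise<unknown>;

function useUsersList() {
  return useInfiniteQuery(
    ["users", "all"],
    ({ pageParam = 0 }) => fetchUsers(pageParam),
    { getNextPageParam: (lastPage) => lastPage.nextCursor }
  );
}

function useUser(id: number) {
  return useQuery(["users", id], () => fetchUser(id));
}

function useInvalidateAllUserData() {
  const queryClient = useQueryClient();
  // Partial key matching: invalidates ['users', 'all'], ['users', 1], ['users', 2], ...
  return () => queryClient.invalidateQueries(["users"]);
}
```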
But yes, it will refetch the whole list, and if you don't want to manually set with setQueryData, I don't see any other way currently.
If you return the whole detail data for one user from your mutation, I don't see why setting it with setQueryData would get you out-of-sync with the backend though. We are doing this a lot :)
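Here's a rough sketch of that pattern for an infinite query, assuming react-query v3, a hypothetical followUser() call that returns the updated user, and pages shaped as { users, nextCursor }:

```typescript
// Sketch: when the follow mutation succeeds, patch the one affected member
// inside the infinite query's cached pages instead of refetching everything.
import { useMutation, useQueryClient, InfiniteData } from "react-query";

interface User { id: number; name: string; isFollowing: boolean; }
interface UsersPage { users: User[]; nextCursor?: number; }

// Hypothetical API call (assumption): returns the full updated user.
declare function followUser(userId: number): Promise<User>;

function useFollowUser() {
  const queryClient = useQueryClient();
  return useMutation(followUser, {
    onSuccess: (updatedUser) => {
      queryClient.setQueryData<InfiniteData<UsersPage>>(
        ["users", "all"],
        (data) => ({
          pageParams: data?.pageParams ?? [],
          pages: (data?.pages ?? []).map((page) => ({
            ...page,
            users: page.users.map((u) => (u.id === updatedUser.id ? updatedUser : u)),
          })),
        })
      );
    },
  });
}
```

Since the mutation response reflects the server's own state, writing it into the cache this way stays consistent without refetching every loaded page.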
I have seen, on many occasions, a DynamoDB conditional put throw a ConditionalCheckFailedException yet still succeed. Usually in this scenario the request takes quite long (~10s) to finish, but I can see that the item was updated despite the ConditionalCheckFailedException being thrown (and despite the request taking a few seconds).
By the way, I don't force any timeout on the DDB request.
Is this a bug, or some DDB conditional put contract that I misunderstand? Has anyone experienced this issue?
Answering this late to inform others:
ConditionalCheckFailedException but item is persisted:
This typically happens when you save an item to DynamoDB: DynamoDB acknowledges the write request, but the response gets lost on the return path, which can happen for multiple reasons (keeping in mind that DynamoDB is one of the largest distributed systems in the cloud).
The lost response causes the SDK to exceed its timeout while awaiting a reply, which triggers an SDK retry. When the write request is retried, the condition now evaluates to false because the item already exists, which in turn throws a ConditionalCheckFailedException, which can cause confusion.
When I receive a ConditionalCheckFailedException, I typically do a strongly consistent GetItem request for the item to ensure it exists with the values I expect, and move on.
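A sketch of that verify-on-failure pattern using the AWS SDK for JavaScript v3 (the table name, key schema, and attributes are made up for illustration):

```typescript
// Sketch: conditional create, then a strongly consistent read if the
// condition check fails, to distinguish "someone else wrote it" from
// "my own retried write already landed".
import {
  DynamoDBClient,
  PutItemCommand,
  GetItemCommand,
  ConditionalCheckFailedException,
} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

async function createOrderOnce(orderId: string): Promise<void> {
  try {
    await client.send(new PutItemCommand({
      TableName: "orders",                              // hypothetical table
      Item: { pk: { S: orderId }, status: { S: "NEW" } },
      ConditionExpression: "attribute_not_exists(pk)",  // create-only put
    }));
  } catch (err) {
    if (err instanceof ConditionalCheckFailedException) {
      // Read back with strong consistency to see what is actually stored.
      const { Item } = await client.send(new GetItemCommand({
        TableName: "orders",
        Key: { pk: { S: orderId } },
        ConsistentRead: true,
      }));
      if (Item?.status?.S === "NEW") {
        return; // our write landed on a retried request; treat it as success
      }
    }
    throw err;
  }
}
```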
I am trying to figure out how to create and delete nodes with Relay where I don't have a parent node. It seems that NODE_DELETE/RANGE_DELETE and RANGE_ADD all require a parent node. Is there a way to perform create and delete mutations from the root query object in Relay.js?
Note: I did find examples where creates can be performed with a FIELDS_CHANGE config, but they lack any documentation or explanation.
You should be able to use REQUIRED_CHILDREN for this purpose. It's not currently well-documented (or even documented), and it has a somewhat confusing name (as a result, we have a task for renaming it and improving the docs). It will likely be renamed to EXTRA_FRAGMENT in the future.
Normally when you issue a mutation, we perform an intersection between the "fat query" (all the fields that could possibly change as the result of the mutation) and the "tracked query" (all the fields that your app has requested for a node so far, and which should be updated when they change) and we send this query to the server with the mutation.
So, for the use case of creating an entirely new node with no parent, you can specify an identifying field like id in the REQUIRED_CHILDREN, and then use that to, for example, navigate to a view showing the newly-created object. This answer has a very detailed example of how you would do this.
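For illustration, here's a rough sketch of what that might look like in Relay Classic; the mutation, payload, and field names are made up, and details may differ across Relay versions:

```typescript
// Sketch: a Relay Classic mutation that uses REQUIRED_CHILDREN to force the
// new node's id into the mutation query. The payload is handed to the
// mutation callbacks but is NOT written into the client store.
import Relay from 'react-relay'; // Relay Classic (pre-1.0)

class CreateItemMutation extends Relay.Mutation {
  getMutation() {
    return Relay.QL`mutation { createItem }`;
  }
  getVariables() {
    return { name: this.props.name };
  }
  getFatQuery() {
    return Relay.QL`fragment on CreateItemPayload { item { id } }`;
  }
  getConfigs() {
    return [{
      type: 'REQUIRED_CHILDREN',
      children: [Relay.QL`fragment on CreateItemPayload { item { id } }`],
    }];
  }
}

// Usage: read the id from the onSuccess payload, e.g. to navigate to the
// newly created object.
// this.props.relay.commitUpdate(new CreateItemMutation({ name }), {
//   onSuccess: (response) => router.push(`/items/${response.createItem.item.id}`),
// });
```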
You can pass client:root as the parentID. And then your pathToConnection would be ['client:root', 'someConnection'].
(Tested with Relay Modern. Not sure if this also applies to Relay Classic, but that's officially deprecated now anyway. This is still one of the top Google results for this issue, so answering.)
(Found in this GitHub issue)
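For example, a minimal sketch of a root-level RANGE_DELETE in Relay Modern (the mutation, connection key, and field names are made up):

```typescript
// Sketch: deleting a node from a connection that hangs off the root query,
// using client:root as the parentID since there is no parent node.
import { commitMutation, graphql } from 'react-relay';
import type { Environment } from 'relay-runtime';

function deleteTodo(environment: Environment, todoId: string) {
  commitMutation(environment, {
    mutation: graphql`
      mutation DeleteTodoMutation($input: DeleteTodoInput!) {
        deleteTodo(input: $input) {
          deletedTodoId
        }
      }
    `,
    variables: { input: { id: todoId } },
    configs: [{
      type: 'RANGE_DELETE',
      parentID: 'client:root',
      connectionKeys: [{ key: 'TodoList_todos' }],
      pathToConnection: ['client:root', 'todos'],
      deletedIDFieldName: 'deletedTodoId',
    }],
  });
}
```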
I'm trying to set up a Solr dataimport.EventListener to call a SOAP service with the IDs of the documents that were added in the update event. I have a class which implements org.apache.solr.handler.dataimport.EventListener, and I thought that the result of getAllEntityFields() would yield a collection of document IDs. Unfortunately, the method yields an empty list. Even more confusing, context.getSolrCore().getName() yields an empty string rather than the actual core name. So it seems I am not quite on the right path here.
The current setup is the following:
Whenever a certain sproc is called in SQL, it puts a message in a queue. This queue has a listener on it which initiates a program that reads the queue and calls other sprocs. After the sprocs are complete, a delta or full import operation is performed on Solr. Immediately after, a method is called to update a cache. However, because the import operation on Solr may not have completed before this update method is called, the cache may be updated with "stale" data.
I was hoping to use a dataimport EventListener to call the method which updates the cache since my other options seem far too complex (e.g. polling the dataimport URL to determine when to call the update method or using a queue to list document IDs which need to be updated and have the EventListener call a method on a service to receive this queue and update the cache). I'm having a bit of a hard time finding documentation or examples. Does anyone have any ideas on how I should approach the problem?
From what I understand, you are trying to update your cache as and when documents are added. Depending on what version of Solr you are running, you can do one of the following.
Solr 4.0 provides a ScriptTransformer that lets you do this.
http://wiki.apache.org/solr/DataImportHandler#ScriptTransformer
With prior versions of Solr, you can chain one handler on top of another, as answered in the following post:
Solr and custom update handler
I've been trying to work out how to cancel a long-running AD search in System.DirectoryServices.Protocols. Can anyone help?
I've looked at the supportControl/supportedCapabilities attributes on RootDSE and they don't contain the 1.3.6.1.1.8 OID so I think that means it doesn't support the LDAP CANCEL extended operation as defined here: https://www.rfc-editor.org/rfc/rfc3909
That leaves the original LDAP ABANDON command (see here for list). But there doesn't seem to be a matching DirectoryRequest Class.
Anyone have any ideas?
I think I've found my answer: whilst I was reading around your suggestion, Martin, I came across the Abort method on the LdapConnection class. I didn't expect to find it there: starting out from the LDAP documentation I'd expected to find it as just another LDAPMessage but the MS guys seem to have treated it as a special case. If anyone is familiar with a non-MS implementation of LDAP and can comment on whether the MS approach is typical, I'd appreciate it to improve my understanding.
I think, but I'm not positive, that there is no async query with a cancel. There is an async property, but it's to allow a collection to be filled, nothing to do with cancelling. The best I can offer is to put your query in a background worker thread and use an async callback that deals with the answer when it comes back. If the user decides to cancel, you can just cancel the background worker thread. You'll free your app up, even if you haven't freed the LDAP server until it finishes its query. You can find info on background worker threads at http://www.c-sharpcorner.com/UploadFile/LivMic/BGWorker07032007000515AM/BGWorker.aspx
Don't forget to call .Dispose() when cleaning up your active directory objects to prevent memory leaks.
If the query produces a lot of data, you can also abandon it through paging. Specify a PageResultRequestControl option in the query, giving a fairly low page size (IIUC, 1000 is the default page size). IIUC, you send a new request each time you receive a page (passing the cookie from one response into the next request). When you choose to cancel the query, send another request with zero expected results.