Is there any provision in the Azure Search service to add a trigger, or to get a notification, when the number of fields is exceeded?
I ask because I am using the "Basic" tier, which allows 100 fields, and my application generates more than 100 fields. I am not sure, but I believe that after exceeding somewhere around 900-1000 fields my index was deleted automatically.
When you attempt to add more fields than permitted by the service tier quota, your Create Index or Update Index API calls will fail with a corresponding error message. Under no circumstances will Azure Search delete your index automatically.
After some research I have found that there is no provision to add a trigger; we need to upgrade from the "Basic" tier to "Standard" or higher.
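There is no built-in notification, but a rough do-it-yourself check is possible: poll the index definition and count its fields before adding more. A minimal Python sketch, assuming hypothetical service and index names and an older GA api-version; the 100-field figure is the Basic-tier quota discussed above:

import requests

SERVICE = "my-search-service"   # hypothetical service name
INDEX = "my-index"              # hypothetical index name
API_KEY = "<admin-api-key>"
FIELD_QUOTA = 100               # Basic-tier limit per the question

url = (f"https://{SERVICE}.search.windows.net/indexes/{INDEX}"
       "?api-version=2015-02-28")
resp = requests.get(url, headers={"api-key": API_KEY})
resp.raise_for_status()

# The index definition lists its schema; count the (top-level) fields.
field_count = len(resp.json()["fields"])
if field_count >= FIELD_QUOTA * 0.9:
    print(f"Warning: {field_count}/{FIELD_QUOTA} fields used; "
          "Create/Update Index calls will start failing at the quota.")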
I have an application that uses Cloud Datastore via App Engine to save data.
I need to refresh the clients when an object is put in the database. To do that, after the object is put in the database, the server sends a sync message to the clients. The clients read the sync message and issue a query to the server, and the server runs a query to return the new results.
The problem is that when the query runs, the object that was just put does not appear in the query results. Reading the documentation, I suppose the reason is that the put has not yet reached Milestone B (see https://cloud.google.com/appengine/articles/transaction_isolation), because the object does appear in a later call.
How can I know when a put reaches "Milestone B"? If it is not possible to know, how can I implement this logic (refresh clients after a put)?
You can ensure up-to-date query results by using an ancestor query, or, if you know the key of the specific entity you need to retrieve, you can fetch it by key rather than using a query.
This page discusses the trade-offs of using ancestor queries.
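A minimal sketch of the ancestor-query approach with the App Engine ndb Python API; the Message model and the Account parent key are hypothetical. Queries within a single entity group are strongly consistent, so an entity that was just put is guaranteed to appear:

from google.appengine.ext import ndb

class Message(ndb.Model):
    text = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

parent_key = ndb.Key("Account", "user@example.com")  # hypothetical entity group

# Put the entity into the entity group...
Message(parent=parent_key, text="hello").put()

# ...and an ancestor query sees it immediately (strong consistency).
messages = Message.query(ancestor=parent_key).order(-Message.created).fetch(20)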
The data do not appear in the result of your query because the indexes have not been updated yet.
There is some latency before the indexes are updated, and unfortunately there is no way to know when that will happen.
The only way to handle this case is to use the entity's key, which is the only index guaranteed to be updated as soon as the entity is stored.
https://cloud.google.com/appengine/docs/java/datastore/entities
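For the fetch-by-key variant, a minimal sketch (same hypothetical Message model as in the earlier sketch): put() returns the entity's key, and key.get() reads by key without going through the eventually consistent global indexes:

from google.appengine.ext import ndb

key = Message(text="hello").put()   # put() returns the entity's key
entity = key.get()                  # strongly consistent read by key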
I am building an Azure Chargeback solution and for that I am pulling the Azure Usage data from Azure Billing REST APIs for multiple subscriptions and different dates. I need to store this into custom MS SQL database as per customer’s requirements. I get various usage records from Azure.
Problem: From these usage records, I am not able to find any combination of columns in the data I receive that gives me a unique key to identify a usage record for a particular subscription and a specific date. The only column I see that differs is Quantity, but even that can be duplicated. E.g. if there are 2 VMs of type A1 with no data or applications on them, in the same cloud service, then they will have the exact same usage quantity. I do not get the exact name of the VM or any other resource via the Usage APIs.
One Custom Solution (Ineffective): I can append a counter or unique ID to the usage records, but if I fetch the data the next time, the order may shuffle or new data may be introduced, breaking the uniqueness logic. Any logic I build to check whether data is missing in the DB will be buggy if the order in which usage records are returned changes (for a specific subscription and a specific date).
I am sure that Microsoft stores this data in some database. I can’t find the unique id to identify a usage record from many records returned by the Billing API. Maybe I am missing something here.
I will appreciate any help or any pointers on this.
When you call the Usage API, set the showDetails parameter to true: &showDetails=true
MSDN Doc
This will populate the instance data in the returned JSON with the unique URI for the resource, which includes its name. For example:
Website:
"instanceData": "{\"Microsoft.Resources\":{\"resourceUri\":\"/subscriptions/xxx-xxxx/resourceGroups/mygoup/providers/Microsoft.Web/sites/WebsiteName\",\"...
Virtual Machine:
"instanceData": "{\"Microsoft.Resources\":{\"resourceUri\":\"/subscriptions/xxx-xxxx/resourceGroups/TESTINGBillGroup/providers/Microsoft.Compute/virtualMachines/TestingBillVM\",\...
If showDetails is false, all your resources will be aggregated on the server side by resource type; for example, all your websites will show up as one entry.
The resource URI, date range, and meterId will form a unique key, as far as I know.
NOTE: If you are using the legacy API your VMs will be aggregated under the cloud service hosting them.
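To illustrate the composite key, a hedged Python sketch that pulls usage aggregates with showDetails=true and combines resourceUri, the usage window, and meterId; the subscription ID, dates, and bearer token are placeholders, and field names follow the Microsoft.Commerce/UsageAggregates response shape:

import json
import requests

SUB_ID = "xxx-xxxx"        # placeholder subscription ID
TOKEN = "<bearer-token>"   # AAD access token, acquired elsewhere

url = (f"https://management.azure.com/subscriptions/{SUB_ID}"
       "/providers/Microsoft.Commerce/UsageAggregates")
params = {
    "api-version": "2015-06-01-preview",
    "reportedStartTime": "2017-01-01T00:00:00Z",
    "reportedEndTime": "2017-01-02T00:00:00Z",
    "showDetails": "true",
}
resp = requests.get(url, params=params,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for record in resp.json()["value"]:
    props = record["properties"]
    instance = json.loads(props["instanceData"])   # instanceData is a JSON string
    resource_uri = instance["Microsoft.Resources"]["resourceUri"]
    # Composite unique key per the answer: resource URI + date range + meterId
    unique_key = (resource_uri, props["usageStartTime"],
                  props["usageEndTime"], props["meterId"])
    print(unique_key, props["quantity"])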
We are trying to develop a SCIM enabled Provisioning system for provisioning data from an Enterprise Cloud Subscriber(ECS) to Salesforce(Cloud Service Provider-CSP). We are following SCIM 1.1 standard.
What are we able to do:
We are able to perform CRUD operations on the User object using the Salesforce auto-generated userId field.
Exact Problem:
We are not able to update/delete the User object using the externalId provided by ECS.
We tried something like the request below, but it is not working; an Unknown_Exception is thrown:
XXX/my.salesforce.com/services/scim/v1/Users/701984?fields=externalId
Please note that it is not possible to store the Salesforce userId in ECS's database due to compliance reasons, so we have to depend entirely on the externalId.
Possible Workaround:
Step 1: Read the userId from Salesforce based on the externalId.
Step 2: Update the User object using the Salesforce userId obtained in Step 1.
But this two-step process would degrade performance.
Is there any way to update/delete the User by externalId?
Could you please guide us on this. Thanks so much.
I realize this is an old thread, but I wanted to note that you CAN update Users via REST using an external ID. The endpoint in the question above is incorrect. Here is how it should be set; send it as a PATCH request (a sketch follows the notes below):
[instance]/services/data/v37.0/sobjects/user/[external_id__c]/[external id value]
Instance = your instance, e.g. https://test.salesforce.com/
external_id__c = API name of your custom external Id field on User
external id value = the value of the user's external ID
NOTES:
Salesforce responds with an HTTP 204 status code with no content in the body; this is unusual for PATCH requests, but it is the 'success' response.
The external ID field on User has to be a custom field; make sure it is set as UNIQUE.
Ensure the profile/permission set of the user making the call has the Manage Users permission and has access to the external ID field.
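For illustration, a minimal Python sketch of that PATCH request; the instance URL, the external_id__c field name, the external ID value, and the OAuth token are placeholders:

import requests

INSTANCE = "https://test.salesforce.com"   # your instance
FIELD = "external_id__c"                   # API name of the custom external ID field
EXT_ID = "701984"                          # the user's external ID value
TOKEN = "<oauth-access-token>"

url = f"{INSTANCE}/services/data/v37.0/sobjects/user/{FIELD}/{EXT_ID}"
resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    json={"Email": "new.email@example.com"},   # example field to update
)
assert resp.status_code == 204   # success is 204 No Content, as noted above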
It is a pretty common pattern for other applications, too, to search first and then perform an update on the returned object. Your workaround seems fine to me. What performance problem are you concerned about? Are you concerned about Salesforce not being able to process more requests, or about the higher response time in your application because you need to make multiple requests? Have you actually measured how much an extra call costs?
I need to add full-text search capabilities to my existing database. Of course the first thought is something like Solr or Elasticsearch. The blocking point I have reached is: how to securely display results returned from the underlying search engine (let's consider Solr or Elasticsearch for now; any other solution or engine that addresses the point is also appreciated).
The tricky context is that my system has, for example, Personal Profile records that are to be indexed. One of the fields in a personal profile is the manager's feedback. Normally that field is visible only to the employee's direct manager and higher in the hierarchy, i.e. a 'manager' from another branch will not be able to see it. However, I want that field to be searchable via full-text search, but only by people who can actually see it.
Now I query Solr for 'stupid' (that is the query string) and it returns N documents. When returning them to the end user I will remove the 'manager's feedback' field, because the end user is not the manager of the people in question; but the mere presence of a document in the result set is already evidence that 'stupid' appears in it somewhere.
The question is: what is a workable approach to handle that use case? Is it possible to plug a home-grown security filter for outputs into Solr/ES?
Caveats:
filtering out only fields does not work because of the scenario mentioned above
filtering out complete documents will not work, because:
the search engine does not tell you which fields matched, so there is no way to manually filter the result set by field (http://elasticsearch-users.115913.n3.nabble.com/Best-way-to-return-which-field-matched-td2713071.html)
even if that worked, removing documents from the result set would spoil the facets returned by the engine (e.g. number of matches by department); I would have to either recalculate facets manually, or they would not match the manually filtered records and would reveal what I actually do not want to show to end users
In Solr you can create multiValued fields. In your case you can use one to store denormalized values of the organization structure.
In the described scenario you create a multiValued field ouId (organization unit ID) and store the employee's ouId together with all parent ouIds. In other words, you save all allowed ouIds into this field.
In the search scenario you then use a filter query (the fq parameter), filtering by the manager's ouId.
Example:
..&fq=ouId:12
where 12 is the organization unit ID of the selected manager.
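As a minimal sketch (hypothetical core and field names), the same filter expressed as an HTTP request to Solr's select handler. Because the filter is applied inside the engine, facet counts are computed only over documents the manager is allowed to see, so they stay consistent:

import requests

SOLR = "http://localhost:8983/solr/profiles"   # hypothetical core
manager_ou_id = 12

resp = requests.get(f"{SOLR}/select", params={
    "q": "feedback:stupid",           # the full-text query
    "fq": f"ouId:{manager_ou_id}",    # restrict to docs whose allowed ouIds include the manager's
    "wt": "json",
})
docs = resp.json()["response"]["docs"]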
Maybe this is helpful for you: https://github.com/salyh/elasticsearch-security-plugin. It adds document-level security to Elasticsearch.
"Currently for user based authentication and authorization Kerberos/SPNEGO and NTLM are supported through 3rd party library waffle (only on windows servers). For UNIX servers Kerberos/SPNEGO is supported through tomcat build in SPNEGO Valve (Works with any Kerberos implementation. For authorization either Active Directory and generic LDAP is supported). PKI/SSL client certificate authentication is also supported (CLIENT-CERT method). SSL/TLS is also supported without client authentication.
You can use this plugin also without Kerberos/NTLM/PKI but then only host based authentication is available.
As of now two security modules are implemented:
Actionpathfilter: Restrict actions against Elasticsearch on a coarse-grained level like who is allowed to to READ, WRITE or even ADMIN rest api calls
Document level security (dls): Restrict actions on document level like who is allowed to query for which fields within a document"
I'm building a client/server app where I want to sync data. I'm thinking about including the largest key from the local client database in the query, so the server can fetch all entities added after that entity (with key > largest_local_key).
Can I be sure that Google App Engine always increases the ID of new entities?
Is that a good way to implement synchronization?
No, IDs do not increase monotonically.
Consider synchronizing based on a create/update timestamp.
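A minimal sketch of the timestamp approach with the ndb Python API; the Item model and property names are hypothetical. Note that non-ancestor queries are eventually consistent, so clients should overlap their sync windows slightly rather than rely on an exact cutoff:

from google.appengine.ext import ndb

class Item(ndb.Model):
    payload = ndb.StringProperty()
    updated = ndb.DateTimeProperty(auto_now=True)   # refreshed on every put()

def fetch_changes(last_sync):
    # Return everything updated since the client's last sync time.
    return Item.query(Item.updated > last_sync).order(Item.updated).fetch()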