I have a question regarding the index batch operation described here: https://learn.microsoft.com/en-us/azure/search/search-import-data-dotnet
In the sample from the document:
[sample code shown in the document]
These three items are different documents, identified by the hotel Id field.
My question is:
What will happen if multiple actions against the same document (e.g. specifying the same hotel Id as in the example) are included in the array?
How does the index batch operation handle ordering for multiple actions against the same document?
I understand that for upload it makes sense to ensure distinct document Ids in the operation list, but this does not apply to merge.
Thanks in advance!
Tony,
Azure Search provides no guarantees about the order of operations in an index batch operation. They are all executed independently, so it's possible to have a partial success. Please see this link for more information about the response codes you can receive from this operation. I would avoid including multiple operations against the same document in a single batch.
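For reference, the batch pattern looks roughly like this (a sketch against the Microsoft.Azure.Search SDK the linked article uses; Hotel is the article's model class, indexClient is assumed to be created elsewhere, and note that each action targets a distinct key):

```csharp
using System;
using System.Linq;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// Each action below targets a distinct HotelId, per the advice above.
static void IndexBatchExample(ISearchIndexClient indexClient)
{
    var batch = IndexBatch.New(new[]
    {
        IndexAction.Upload(new Hotel { HotelId = "1", HotelName = "Fancy Stay" }),
        IndexAction.MergeOrUpload(new Hotel { HotelId = "2", HotelName = "Roach Motel" }),
        IndexAction.Delete(new Hotel { HotelId = "3" })
    });

    try
    {
        indexClient.Documents.Index(batch);
    }
    catch (IndexBatchException e)
    {
        // The batch can partially succeed: each action gets its own result,
        // so resubmit only the documents that failed.
        var failedKeys = e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key);
        Console.WriteLine("Failed to index: " + string.Join(", ", failedKeys));
    }
}
```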
Matt
Based on my understanding of your question, you want to know more about how the index batch operation handles concurrency on the same document. Here are some thoughts based on my experience.
Azure only publishes the REST API documentation for its services and the SDK source code for the various programming languages; the internal mechanisms of these services are not documented directly.
The Azure SDKs are mostly wrappers around the Azure REST APIs, so we can infer some of the mechanism by studying how the REST APIs behave.
The index batch operation is based on the REST API Add, Update or Delete Documents (Azure Search Service REST API), which handles the multiple actions described in a single HTTP request one by one. For multiple actions against the same document included in the array, there is no conflict within a single request because the actions are processed sequentially.
If many requests for the same document are issued concurrently, the service can return certain errors, which are described in the REST API documentation, and you should retry the operation where appropriate.
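As a rough illustration of that retry handling against the REST endpoint directly (a sketch only: the service name, index name, api-version, and key below are placeholders):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Sketch of retrying the REST call that the SDK wraps.
static async Task IndexWithRetryAsync(HttpClient http, string actionsJson)
{
    const string url = "https://myservice.search.windows.net/indexes/hotels/docs/index?api-version=2017-11-09";

    for (int attempt = 1; attempt <= 5; attempt++)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, url)
        {
            Content = new StringContent(actionsJson, Encoding.UTF8, "application/json")
        };
        request.Headers.Add("api-key", "<admin-key>");

        var response = await http.SendAsync(request);
        if (response.StatusCode == HttpStatusCode.ServiceUnavailable)
        {
            // 503: the service is asking us to back off; retry with an
            // exponential delay.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            continue;
        }

        // 200 = all actions succeeded; 207 = partial success, in which case
        // the response body lists a status per action and the failed ones
        // should be resubmitted.
        response.EnsureSuccessStatusCode();
        return;
    }
    throw new HttpRequestException("Indexing failed after 5 attempts.");
}
```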
Hope it helps.
Related
My organization has multiple databases that we need to provide search results for. Right now you have to search each database individually. I'm trying to create a web interface that will query all the databases at once and sort the results based upon relevance.
Some of the databases I have direct access to. Others I can only access via a REST API.
My challenge isn't knowing how to query each individual database. I understand how to make API calls. It's how to sort the results by relevance.
On the surface it looks like Elasticsearch would be a good option. Its inverted-index system seems like a good solution for figuring out which results are going to be the most relevant to our users. It's also super fast.
The problem is that I don't see a way (so far) to include results from an external API into Elasticsearch so it can do its magic.
Is there a better option that I'm not aware of? Or is it possible to have Elasticsearch evaluate the relevance of results from an external API while also including data from its own internal indices?
I did find an answer, although nobody replied. :\
The answer is to use the http_poller input plugin with Logstash. This will make an API call and ingest the results into Elasticsearch.
Another option could be some form of microservices orchestration for the various API calls then merge them into a final result set.
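For anyone landing here later, a minimal Logstash pipeline along those lines might look like this (the URL and index name are placeholders; check the http_poller docs for the exact options your Logstash version supports):

```
input {
  http_poller {
    urls => {
      external_api => {
        method => get
        url => "https://example.com/api/search-results"   # placeholder endpoint
        headers => { "Accept" => "application/json" }
      }
    }
    request_timeout => 60
    # Poll every five minutes; adjust to the API's rate limits.
    schedule => { cron => "*/5 * * * * UTC" }
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "external-api-results"
  }
}
```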
I was wondering how so many job sites have so many job offers and so much information regarding other companies' openings. For instance, if I were to start my own job search engine, how would I be able to get the information that sites like indeed.com have into my own databases? One site (jobmaps.us) says that it's "powered by indeed" and seems to follow the same format as indeed.com (as do all other job search websites). Is there some universal job search template that I can use?
Thanks in advance.
Some services offer an API which allows you to "federate" searches (relay them to multiple data sources, then gather all the results together for display in one place). Alternatively, some offer a mechanism that would allow you to download/retrieve data, so you can load it into your own search index.
The latter approach is usually faster and gives you total control but requires you to maintain a search index and track when data items are updated/added/deleted on remote systems. That's not always trivial.
In either case, some APIs will be open/free and some will require registration and/or a license. Most will have rate limits. It's all down to whoever owns the data.
It's possible to emulate a user browsing a website, sending HTTP requests and analysing the response from a web server. By knowing the structure of the HTML, it's possible to extract ("scrape") the information you need.
This approach is often against site policies and is likely to get you blocked. If you do go for this approach, ensure that you respect any robots.txt policies to avoid being blacklisted.
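For illustration, a minimal sketch of honoring robots.txt before fetching a page (the check is deliberately naive: it ignores per-user-agent groups and crawl delays, which a real crawler should handle with a proper robots.txt parser):

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Fetch a page only if no global Disallow rule covers its path.
static async Task<string> FetchIfAllowedAsync(HttpClient http, string site, string path)
{
    var robots = await http.GetStringAsync(site + "/robots.txt");

    var disallowed = robots.Split('\n')
        .Select(l => l.Trim())
        .Where(l => l.StartsWith("Disallow:", StringComparison.OrdinalIgnoreCase))
        .Select(l => l.Substring("Disallow:".Length).Trim())
        .Where(p => p.Length > 0);

    if (disallowed.Any(p => path.StartsWith(p, StringComparison.Ordinal)))
        return null; // the site asks crawlers to stay out of this path

    return await http.GetStringAsync(site + path);
}
```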
How do Zapier/IFTTT implement the triggers and actions for different API providers? Is there a generic approach, or are they implemented individually?
I think the implementation is based on REST/OAuth, which is generic at a high level. But Zapier/IFTTT define a lot of trigger conditions and filters, and these conditions and filters must be specific to each provider. Is the corresponding implementation done individually or generically? If individually, that must take a vast amount of labor. If generically, how is it done?
Zapier developer here - the short answer is, we implement each one!
While standards like OAuth make it easier to reuse some of the code from one API to another, there is no getting around the fact that each API has unique endpoints and unique requirements. What works for one API will not necessarily work for another. Internally, we have abstracted away as much of the process as we can into reusable bits, but there is always some work involved to add a new API.
PipeThru developer here...
There are common elements in each API which can be reused, such as OAuth authentication and common data formats (JSON, XML, etc.). Most APIs strive for a RESTful implementation. However, theory meets reality, and most APIs are all over the place.
Each service offers its own endpoints, and there is no commonly agreed-upon set of endpoints that is correct for a given service. For example, within CRM software, it's not clear how a person, notes on said person, corresponding phone numbers, addresses, as well as activities should be represented. Do you provide one endpoint or several? How do you update each? Do you provide tangential records (like the company for the person) with the record or not? Each requires specific knowledge of that service as well as some data normalization.
Most of the triggers involve checking for a new record (unique id) or an updated field, most usually the last-update timestamp. Most services present their timestamps in ISO 8601 format, which makes parsing timestamps easy, but not every service does. Dropbox actually provides a delta API endpoint to which you can present a hash value, and Dropbox will send you everything new/changed from that point. I would love to see delta and/or activity endpoints in more APIs.
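To illustrate the timestamp-based trigger, here is a small sketch (the Record shape and its ISO 8601 updated-at field are assumptions, not any particular service's API):

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

// Hypothetical record shape returned by a polled API.
class Record
{
    public string Id { get; set; }
    public string UpdatedAt { get; set; } // e.g. "2016-03-01T12:34:56Z"
}

class UpdateTrigger
{
    private DateTimeOffset _lastSeen = DateTimeOffset.MinValue;

    // Returns the records newer than anything seen so far and advances the
    // high-water mark, so each change fires exactly once.
    public List<Record> NewOrUpdated(IEnumerable<Record> page)
    {
        var hits = new List<Record>();
        var newest = _lastSeen;
        foreach (var r in page)
        {
            var ts = DateTimeOffset.Parse(r.UpdatedAt,
                CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);
            if (ts > _lastSeen)
            {
                hits.Add(r);
                if (ts > newest) newest = ts;
            }
        }
        _lastSeen = newest;
        return hits;
    }
}
```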
Bottom line, integrating each individual service does require a good amount of effort and testing.
I will point out that Zapier did implement an API for other companies to plug into their tool. Instead of Zapier implementing your API and polling you for data, you can send new/updated data to Zapier to trigger one of their Zaps. I like to think of this as webhooks on crack. This allows Zapier to support many more services without having to program each one.
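As a sketch of that push model from the provider's side (the hook URL below is a placeholder for the URL Zapier generates when a user configures the Zap):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// POST new/updated data to the user's Zapier hook instead of being polled.
static Task NotifyZapierAsync(HttpClient http, string json)
{
    const string hookUrl = "https://hooks.zapier.com/hooks/catch/<account>/<zap>/";
    return http.PostAsync(hookUrl,
        new StringContent(json, Encoding.UTF8, "application/json"));
}
```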
I've implemented a few APIs on Zapier, so I think I can provide at least a partial answer here. If not using webhooks, Zapier will examine the API response from a service for the field with the shortest name that also includes the string "id". Changes to this field cause Zapier to trigger a task. This is based on the assumption that an id is usually incremental or random.
I've had to work around this by shifting the id value to another field and writing different values to id when it was failing to trigger, or triggering too frequently (dividing by 10 and then writing id can reduce the trigger sensitivity, for example). Ambiguity is also a problem, for example in an API response that contains fields like post_id and mesg_id.
Short answer is that the system makes an educated guess, but to get it working reliably for a specific service, you should be quite specific in your code regarding what constitutes a trigger event.
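To make that concrete, here is a sketch of the id-field guess and deduplication behavior as I understand it (the details are my reading of the observed behavior, not documented Zapier internals):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Pick the field with the shortest name containing "id", then fire a
// trigger the first time each value of that field is seen.
class DedupTrigger
{
    private readonly HashSet<string> _seen = new HashSet<string>();

    public static string PickIdField(IDictionary<string, object> record) =>
        record.Keys
              .Where(k => k.IndexOf("id", StringComparison.OrdinalIgnoreCase) >= 0)
              .OrderBy(k => k.Length)
              .FirstOrDefault();

    public bool ShouldTrigger(IDictionary<string, object> record)
    {
        var idField = PickIdField(record);
        if (idField == null) return false;

        // Note the ambiguity mentioned above: "post_id" and "mesg_id" are
        // the same length, so the choice between them is arbitrary.
        return _seen.Add(Convert.ToString(record[idField]) ?? "");
    }
}
```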
I'm working on a cloud-based line-of-business application. Users can upload documents and other types of objects to the application. Users upload quite a number of documents, and together there are several million docs stored. I use SQL Server.
Today I have a somewhat-RESTful API which allows users to pass in a DocumentSearchQuery entity where they supply a keyword together with the requested sort order and paging info. They get a DocumentSearchResult back, which is essentially a sorted collection of references to the actual documents.
I now want to extend the search API to other entity types than documents, and I'm looking into using OData for this. But I get the impression that if I use OData, I will face several problems:
There's no built-in limit on what fields users can query, which means that either performance will depend on whether they query an indexed field or not, or I will have to implement my own parsing of incoming OData requests to ensure they only query indexed fields. (Since it's a multi-tenant application and customers share physical hardware, slow queries are not really acceptable, since they affect other customers.)
Whatever I use to access data in the backend needs to support IQueryable. I'm currently using Entity Framework, which does this, but I will probably use something else in the future, which means it's likely that I will need to do my own parsing of incoming queries again.
There's no built-in support for limiting what data users can access. I need to validate incoming OData queries to make sure they only access data they actually have permission to access.
I don't think I want to go down the road of manually parsing incoming expression trees to make sure they only try to access data which they have access to. This seems cumbersome.
My question is: Considering the above, is using OData a suitable protocol in a multi-tenant environment where customers write their own clients accessing the entities?
I think it is suitable here. Let me give you some opinions about the problems you think you will face:
There's no built-in limit on what fields users can query, which means that either performance will depend on whether they query an indexed field or not, or I will have to implement my own parsing of incoming OData requests to ensure they only query indexed fields. (Since it's a multi-tenant application and customers share physical hardware, slow queries are not really acceptable, since they affect other customers.)
True. However, you can check the fields used in the filter against the allowed (indexed) fields and permit or deny the operation accordingly.
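As a rough sketch of that check (the field names below are placeholders, and a production version should use a real $filter parser rather than a regex):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// Rough whitelist check for an incoming $filter. OData functions
// (startswith, substringof, ...) would need to be added to the keyword set.
static class QueryGuard
{
    private static readonly HashSet<string> IndexedFields =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        { "Title", "CreatedDate", "OwnerId" };

    private static readonly HashSet<string> Keywords =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        { "and", "or", "not", "eq", "ne", "gt", "ge", "lt", "le",
          "true", "false", "null" };

    public static bool IsAllowed(string rawFilter)
    {
        if (string.IsNullOrWhiteSpace(rawFilter)) return true;

        // Drop string literals, then treat every remaining identifier as a
        // property name and require it to be on the indexed-field list.
        var withoutLiterals = Regex.Replace(rawFilter, "'[^']*'", "");
        return Regex.Matches(withoutLiterals, @"[A-Za-z_]\w*")
                    .Cast<Match>()
                    .Select(m => m.Value)
                    .Where(t => !Keywords.Contains(t))
                    .All(t => IndexedFields.Contains(t));
    }
}
```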
Whatever I use to access data in the backend needs to support IQueryable. I'm currently using Entity Framework, which does this, but I will probably use something else in the future, which means it's likely that I will need to do my own parsing of incoming queries again.
Yes, there is a provider for EF. That means if you use something else in the future, you will need to write your own provider. If you already expect to change away from EF, you are probably taking the decision too early, and I don't recommend WCF Data Services in that case.
There's no built-in support for limiting what data users can access. I need to validate incoming OData queries to make sure they only access data they actually have permission to access.
Right, there isn't any out-of-the-box support for that in WCF Data Services. However, that is part of the authorization mechanism you will need to implement anyway. The good news is that it is pretty easy to do with QueryInterceptors: simply intercept the query and restrict it based on the user's privileges. This is something you would have to implement regardless of the technology you use.
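A minimal sketch of such an interceptor, assuming a Documents entity set whose entities carry a TenantId property (the names are illustrative, not from the question):

```csharp
using System;
using System.Data.Services;
using System.Linq.Expressions;

// "LobEntities" is the EF context and "Document" an entity with a TenantId
// property; both are illustrative names.
public class LobDataService : DataService<LobEntities>
{
    [QueryInterceptor("Documents")]
    public Expression<Func<Document, bool>> OnQueryDocuments()
    {
        // Runs for every query against the Documents set. The returned
        // predicate is ANDed with whatever $filter the client sent, so a
        // client can narrow the results but never widen them.
        var tenantId = GetCurrentTenantId();
        return doc => doc.TenantId == tenantId;
    }

    private Guid GetCurrentTenantId()
    {
        // Placeholder: resolve the tenant from the authenticated principal.
        throw new NotImplementedException();
    }
}
```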
My answer: considering the above, WCF Data Services is a suitable protocol in a multi-tenant environment where customers write their own clients accessing the entities, at least as long as you stay with EF. And you should keep in mind the huge amount of effort it saves you.
I'd like to have a single instance of Solr, protected by some sort of authentication, that operates against different indexes based on the credentials used for that authentication. The type of authentication is flexible, although I'd prefer to work with open standards (existing or emerging), if possible.
The core problem I'm attempting to solve is that different users of the application (potentially) have access to different data stored in it, and a user should not be able to search over inaccessible data. Building an index for each user seems the easiest way to guarantee that one user doesn't see forbidden data. Is there, perhaps, an easier way? One that would obviate the need for Solr to have a way to map users to indexes?
Thanks.
The Solr guys have a pretty exhaustive overview of what is possible, see http://wiki.apache.org/solr/MultipleIndexes
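If maintaining an index (or core) per user turns out to be heavy, one commonly suggested alternative is a single index where every document carries an owner (or ACL) field, and your application appends a mandatory filter query on the server side so a user can only ever match their own documents. A rough sketch (the field name, core URL, and response handling are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// The fq parameter is added by the server, never by the client, so the
// restriction cannot be bypassed from the search box.
static Task<string> SearchForUserAsync(HttpClient http, string userQuery, string userId)
{
    var url = "http://localhost:8983/solr/select"
            + "?q=" + Uri.EscapeDataString(userQuery)
            + "&fq=" + Uri.EscapeDataString("owner:" + userId) // enforced server-side
            + "&wt=json";
    return http.GetStringAsync(url);
}
```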