Custom Searcher - Blending of hits from different sources - Vespa

We have a need for "blending of hits from different sources", and per your documentation it is recommended to write a custom Searcher in Java. Is there a demo of this written somewhere on GitHub? I wouldn't even know where to start :( I understand I can create search "chains", preferably asynchronous, and then blend results in Java before returning them... but then how would I handle pagination, limits, etc.? This all seems very complicated for someone who doesn't even know Java that much. So, I am hoping someone has already written a demo for this? Please? Anyone?
Thank you so much
EDIT to make my question clearer:
We are writing a search engine that fetches data from various websites. Some websites have 10 million indexable items, others only 100,000. When we present the results to the end user, we want to include results from all our sources (when a match applies) - say, 10 results from each of the websites we crawl, so that they all get an equal amount of attention on the page. If we don't do custom blending, what happens is that the largest website with the most items wins all our traffic.
I understand that we can send 10 separate queries to Vespa and blend the results in our front end, but that seems very inefficient. Hence the question about a "Custom Searcher". Thank you so much!

That documentation covers some very advanced use cases which you do not have. Are your sources different Vespa schemas or content clusters? If so, Vespa will by default blend the hits returned from each according to their relevance scores, so there's nothing you need to do.
The two other most common use cases are:
Some (or all) of the data sources are external, so you need to write a Searcher component to fetch the external data and turn it into a Result.
You want the data to be blended in some custom way (rather than by relevance score). If so, you need to exclude the default blending Searcher (com.yahoo.prelude.searcher.BlendingSearcher) and write your own; see the sketch below.
If you provide some more information about your use cases I can give you some code examples.
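In the meantime, here is a rough sketch of the second case, assuming the round-robin behavior described in the question's EDIT is what "custom" means here. This is not production code: the class name and score constant are made up, and it assumes a flat hit list where each hit is tagged with the source that produced it (federation does this).

```java
import com.yahoo.search.Query;
import com.yahoo.search.Result;
import com.yahoo.search.Searcher;
import com.yahoo.search.result.Hit;
import com.yahoo.search.searchchain.Execution;

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of a custom blender: interleave hits from each source
 * round-robin instead of ordering purely by relevance score.
 */
public class RoundRobinBlender extends Searcher {

    @Override
    public Result search(Query query, Execution execution) {
        Result result = execution.search(query);

        // Bucket non-meta hits by the source that produced them.
        Map<String, List<Hit>> bySource = new LinkedHashMap<>();
        for (Hit hit : result.hits().asList()) {
            if (hit.isMeta()) continue; // leave grouping/meta hits alone
            bySource.computeIfAbsent(hit.getSource(), s -> new ArrayList<>()).add(hit);
        }

        // Reassign relevance so ordering becomes source round-robin:
        // round 0 of every source outranks round 1 of every source, etc.
        double score = 1_000_000; // arbitrary starting score, made up
        boolean tookAny = true;
        for (int round = 0; tookAny; round++) {
            tookAny = false;
            for (List<Hit> hits : bySource.values()) {
                if (round < hits.size()) {
                    hits.get(round).setRelevance(score--);
                    tookAny = true;
                }
            }
        }
        result.hits().sort();
        return result;
    }
}
```

You would deploy this in a search chain in services.xml in place of the excluded BlendingSearcher.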
EDIT: Use grouping to solve the need explained under "EDIT" in the question:
Create a "siteid" field when feeding (e.g in document processing).
Use the grouping expression all(group(siteid) each(max(10) output(summary())))
See http://docs.vespa.ai/documentation/grouping.html
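For completeness: the grouping expression can be appended to your YQL after a | operator, or attached programmatically in a Searcher. A sketch of the latter using the GroupingRequest API (the class name is made up):

```java
import com.yahoo.search.Query;
import com.yahoo.search.Result;
import com.yahoo.search.Searcher;
import com.yahoo.search.grouping.GroupingRequest;
import com.yahoo.search.grouping.request.GroupingOperation;
import com.yahoo.search.searchchain.Execution;

/**
 * Attaches the per-site grouping from the answer above to every query,
 * so each siteid contributes at most 10 hit summaries.
 */
public class SiteGroupingSearcher extends Searcher {

    @Override
    public Result search(Query query, Execution execution) {
        GroupingRequest request = GroupingRequest.newInstance(query);
        request.setRootOperation(GroupingOperation.fromString(
                "all(group(siteid) each(max(10) output(summary())))"));
        return execution.search(query);
    }
}
```

With max(10), each siteid group contributes at most 10 summaries, so every site gets equal space on the page regardless of how many items it has indexed.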


Scopus DOIs not working in article retrieval APIs

I’m trying to use the Elsapy module to extract the abstracts of documents on certain topics.
I am able to do this but, unfortunately, only for a fraction of the documents found.
For example, a particular search returns 16 documents but I am only able to extract the information (e.g. abstracts) from 4 of them.
Upon further inspection, it seems that the documents I can't get the abstracts of:
- don't have a PII, and
- have DOIs that don't work.
I have tested the DOIs in the Article Retrieval interactive API guide:
- The ones that returned abstracts worked fine.
- The other ones return the error RESOURCE_NOT_FOUND ("The resource specified cannot be found."), even though I have found the original articles and checked that their DOIs are correct.
An example of one that didn’t work is:
Sengupta, N. K., & Sibley, C. G. (2019). The political attitudes and subjective wellbeing of the one percent. Journal of Happiness Studies, 20(7), 2125-2140. doi:10.1007/s10902-018-0038-4
I have found that the ones that do ‘work’ all have the general form:
10.1016/j.ssmph.2019.100471
10.1016/j.apacoust.2015.03.004
Please let me know if you know why this is and how I can fix it.
Thanks for your help :)
The Article Retrieval API only works for Elsevier content hosted on sciencedirect.com; all Elsevier articles have PII identifiers. The example DOI 10.1007/s10902-018-0038-4 does not work because the article is published by Springer and is consequently not available on ScienceDirect.
Kindly note that this is not a bug and everything is working as expected.
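If you need to process mixed-publisher DOI lists anyway, one pragmatic option is to skip non-Elsevier DOIs before calling the API. A rough sketch in Java rather than Elsapy (the API key is a placeholder, and note that 10.1016 covers most, but not all, Elsevier-registered DOIs):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class AbstractFetcher {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final String API_KEY = "your-api-key"; // placeholder

    public static void main(String[] args) throws Exception {
        List<String> dois = List.of(
                "10.1016/j.ssmph.2019.100471",  // Elsevier: retrievable
                "10.1007/s10902-018-0038-4");   // Springer: 404 on this API

        for (String doi : dois) {
            // Most Elsevier DOIs are registered under the 10.1016 prefix;
            // other prefixes are generally not hosted on ScienceDirect.
            if (!doi.startsWith("10.1016/")) {
                System.out.println("Skipping non-Elsevier DOI: " + doi);
                continue;
            }
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.elsevier.com/content/article/doi/" + doi))
                    .header("X-ELS-APIKey", API_KEY)
                    .header("Accept", "application/json")
                    .build();
            HttpResponse<String> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(doi + " -> HTTP " + response.statusCode());
        }
    }
}
```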

Any examples of using a WandSearcher in Vespa? (after a weighted set query)

Currently I am using the REST interface to query Vespa, which seems to work great, but something tells me I should be using Searchers in the application to make the client (server-side code) a bit lighter (bundle the jar file in the application package) and a bit smoother. I have managed to do some simple Searcher/Processor applications, but this is a bit overwhelming.
So are there any readily available examples ?
Basically I want to:
Send to /search?query=someId
Do an ordinary search for the weighted set on this document ID (I guess this can be handy: https://docs.vespa.ai/documentation/reference/inspecting-structured-data.html)
Take the items in the response, add them to a WandItem, and query with the wand searcher on a given field, returning the matches. Similar to the YQL:
"select * from sources * where wand(interest, some weightedset);", "ranking": "combined_score"
Also, just curious: apart from the hassle of string-building the HTTP request I am doing at the moment, are there any performance gains from using a Searcher (going the Java route) versus REST?
Thanks for any insight or code I can start with.
There is an example of using WandItem (YQL wand()) here: https://docs.vespa.ai/documentation/advanced-ranking.html, and see also https://docs.vespa.ai/documentation/using-wand-with-vespa.html, since there are two WAND implementations available in Vespa; from your description, wand() sounds like the one you want for this use case. For the first call you probably want a dedicated document summary to reduce the amount of data fetched, which also gives you the option of serving it out of memory only (see https://docs.vespa.ai/documentation/document-summaries.html).
Also see https://docs.vespa.ai/documentation/searcher-development.html as a general resource on writing Searchers.
For your use case it makes a lot of sense to write a Searcher to perform these two queries, since the second query depends on the first, and you avoid the cost of rendering/HTTP/YQL parsing, which can matter if your client is remote with high network latency.
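I'm not aware of a published end-to-end example, but a sketch of what that two-phase Searcher could look like is below. The names "id", "interest", "interest-summary" and "documentId" are assumptions standing in for your schema ("combined_score" is the rank profile from your YQL); the weighted set is read with the structured-data inspection API you linked.

```java
import com.yahoo.data.access.Inspectable;
import com.yahoo.data.access.Inspector;
import com.yahoo.prelude.query.WandItem;
import com.yahoo.prelude.query.WordItem;
import com.yahoo.search.Query;
import com.yahoo.search.Result;
import com.yahoo.search.Searcher;
import com.yahoo.search.result.Hit;
import com.yahoo.search.searchchain.Execution;

/**
 * Sketch: look up a document's weighted set, then run wand() over it.
 * Field, summary and rank-profile names are assumed; substitute your own.
 */
public class InterestWandSearcher extends Searcher {

    @Override
    public Result search(Query query, Execution execution) {
        String documentId = query.properties().getString("documentId");
        if (documentId == null) return execution.search(query); // pass through

        // Phase 1: fetch the source document, filling only a small summary.
        Query lookup = new Query();
        lookup.getModel().getQueryTree().setRoot(new WordItem(documentId, "id"));
        lookup.setHits(1);
        Result lookupResult = execution.search(lookup);
        execution.fill(lookupResult, "interest-summary");
        if (lookupResult.hits().size() == 0) return lookupResult;
        Hit source = lookupResult.hits().get(0);

        // Phase 2: copy the weighted set entries into a WandItem.
        WandItem wand = new WandItem("interest", 100); // field, target hits
        Object field = source.getField("interest");
        if (field instanceof Inspectable) {
            Inspector entries = ((Inspectable) field).inspect();
            for (int i = 0; i < entries.entryCount(); i++) {
                Inspector entry = entries.entry(i);
                wand.addToken(entry.field("item").asString(),
                              (int) entry.field("weight").asLong());
            }
        }
        Query wandQuery = query.clone();
        wandQuery.getModel().getQueryTree().setRoot(wand);
        wandQuery.getRanking().setProfile("combined_score");
        return execution.search(wandQuery);
    }
}
```

Register it in a search chain in services.xml; the client then only sends the single /search?documentId=someId request.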

Seamless Integration with REST API

Many examples on the net show you how to use ng-repeat with in-memory data, but in my case I have a long table with infinite scroll that gets data by sending requests to a REST API (scroll down - fetch some data, scroll down again - fetch some more data, etc.). It does work, but I'm wondering how I can integrate that with filters.
Right now I have to call a specific method of the API service that makes a request based on the text in the "search" input box, and then the controller updates $scope.data.
Is it possible to build a custom filter that would do that? Then my view would be utterly decoupled from the service, and I could declaratively tell it how to group, order, and filter data, regardless of whether it's in memory or comes from a remote server - a server that can serve only a limited number of records at a time.
Later I'm also going to need grouping and ordering, and I'm so tempted to download the entire dataset and lock the parts of the app responsible for grouping, searching, and ordering (until all the data is on the client), but:
a) that dataset is huge (hundreds of thousands of records)
b) nobody wants to deal with cache-invalidation headaches
c) doing so feels so damn wrong - you don't really expect me to keep all that data in memory, right?
Can you guys point me to maybe some open-source examples where I can steal some ideas from?
Basically I need to build a service and filters that let me work with my "pageable" data that comes from the API as if it were in-memory data.
Regardless of how you choose to solve it (there are many ways to do infinite scroll with Angular; here is one: http://binarymuse.github.io/ngInfiniteScroll/), as of its latest beta version, ng-repeat performs really badly with large amounts of data, and so do filters. The reason is obvious: checking so much data for changes is a tough job. Moreover, ng-repeat by default will re-draw your complete list every time something changes.
There are many solutions you can explore in this area, here are the ones I found productive:
http://kamilkp.github.io/angular-vs-repeat/#?tab=8
http://www.williambrownstreet.net/blog/2013/07/angularjs-my-solution-to-the-ng-repeat-performance-problem/
https://github.com/allaud/quick-ng-repeat
You should also consider the following, which really helps with large amounts of data.
https://github.com/Pasvaz/bindonce
Updated
I guess you can't really control your server's output, and filtering and ordering large amounts of data are better done on the server side anyway.
I was pointing out the links above because, even though writing your own filters (and order-bys) is quite simple to do - http://jsfiddle.net/gdefpfqL/ - (filter by some company name, then click the "Add More" button to add more items), ordering is virtually impossible if you can't control the data coming from the server; the only option is to get it all, order it, and then lazy-load it from the client's memory. So if each of your list items doesn't have many bindings by itself (as in the example I've added) and the list item is a fairly simple one (for instance, you simply present the result as plain text in <li>{{item.name}}</li>), then Angular's ng-repeat might work for you. In this case, filters will work as expected - say you filter by the searched text:
<li ng-repeat="item in items | filter:searchedText"></li>
Even for new items added after the user has searched, it will still work, thanks to the magic of binding.

Look up input fields in a web browser - C language

This is in the C language.
I want to know how I can write a program to look up all the input fields of a website - any website - and then fill them in. I can write a simple web browser in VBS, but how can I analyse the input fields? Even better would be if I could click the lookup field and it put its name in a box... that would be ideal.
Can anyone help? Thanks :)
Are you sure you want to do this in C?
I ask because it is not easy. First of all, you need to be able to run an HTTP GET request against the webpage you wish to view. For this, you probably want libcurl; you definitely don't want to write it from scratch, at any rate.
Next, you need to process the HTML you get, finding all input fields. You do NOT want to do this using regular expressions, if anything for the sake of bobince's blood pressure. The bit you need to take away is that HTML is not a regular language - you need an XML parser. Enter libxml. I'm sure there are other XML libraries out there, and even libraries specifically for parsing HTML.
Finally, having done that (got the fields, etc.), you need to be able to populate them and submit the correct request per the ACTION and METHOD attributes of the FORM.
This is of course assuming you know what the fields should be filled in with. It also assumes nothing else is going on: if you have a JavaScript-validated web form (I sincerely hope they're validating on the request too, but they might provide feedback via JS), you won't benefit from that (unless you're going to integrate JS, in which case you might as well write a browser).
This is not a trivial task and it is the reason there are accessibility standards for HTML, because otherwise it becomes tricky to interpret the form without human interaction.
Of course, this all assumes said HTML is well formed, which isn't always the case...
I might suggest another approach. BeautifulSoup is a well-known Python web-scraping library that works very well. Python as a language also allows easier string manipulation, which will dramatically cut down your development time. I'd suggest giving the need to use C some serious thought, given the size and complexity of the task you want to undertake versus your need to get a result quickly. If you have a lot of time, by all means go for C.
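To make the size gap concrete, here is the whole task sketched in Java with the jsoup HTML parser - another higher-level route alongside Python/BeautifulSoup; the URL is just a placeholder:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

/**
 * Lists every input field in every form on a page. jsoup handles the
 * HTTP GET and the messy-HTML parsing that libcurl + libxml would do in C.
 */
public class InputFieldLister {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("https://example.com/login").get(); // any URL

        for (Element form : doc.select("form")) {
            // ACTION and METHOD tell you how to submit the filled-in form.
            System.out.println("Form: action=" + form.attr("action")
                    + " method=" + form.attr("method"));
            for (Element input : form.select("input, textarea, select")) {
                System.out.println("  field: name=" + input.attr("name")
                        + " type=" + input.attr("type"));
            }
        }
    }
}
```

The equivalent C program with libcurl and libxml would need manual memory management, encoding handling, and tag-soup recovery before it got anywhere near this short.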

How to model Data Transfer Objects for different front ends?

I've run into a recurring problem for which I haven't found any good examples or patterns.
I have one core service that performs all heavy database operations and sends results to different front ends (HTML, Silverlight/Flash, web services, etc.).
One of the service operations is "GetDocuments", which provides a list of documents based on different filter criteria. If I only had one front end, I would package the result as a list of Document DTOs (data transfer objects) that just contain the data. However, different front ends need different amounts of "metadata". The simplest client just needs the document headline and a link reference. Another wants a short text snippet of the document, another also wants a thumbnail, and a third wants the name of the author. It's basically up to the GUI implementation what needs to be displayed.
What's the best way to model this?
- As a lot of different DTOs (Document, DocumentWithThumbnail, DocumentWithTextSnippet) - tends to become a lot of classes
- As one DTO containing all the data, where the client chooses what to display - lots of unnecessary data sent
- As one DTO where certain fields are populated based on what the client requested - tends to become a very large class that needs to be extended over time
- As one DTO with some kind of generic "Metadata" field containing the requested metadata
Or are there other options?
Since I want a high-performance service, I need to think about both network load and caching strategies.
Does anyone have any good patterns or practices that might help me?
What I would do is give the front end the ability to request the presence of the wanted metadata (say, getDocument(WITH_THUMBNAILS | WITH_TEXT_SNIPPET)).
The DTO is then built with only the requested information.
Adding all the possible metadata is, as you said, unacceptable.
I would stay with one class defining all the possible methods (getTitle(), getThumbnail()), and if possible have it return a placeholder when the thumbnail was not requested - something like "Image not available".
If you want to model this as a pattern, take a look at the factory patterns.
Hope this helps you.
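A minimal sketch of that flag-based idea in Java, using an EnumSet instead of bit flags; every type and method name below is hypothetical:

```java
import java.util.EnumSet;
import java.util.Set;

// All names below are hypothetical -- adapt to your actual domain model.
interface SourceDocument {
    String getHeadline();
    String getLink();
    String extractSnippet();
    byte[] renderThumbnail();
    String getAuthorName();
}

/** The metadata a client can opt into, mirroring WITH_THUMBNAILS | WITH_TEXT_SNIPPET. */
enum DocumentField { TEXT_SNIPPET, THUMBNAIL, AUTHOR }

/** One DTO class; optional fields stay null unless requested. */
class DocumentDto {
    String headline;      // always populated
    String link;          // always populated
    String textSnippet;   // populated only on request
    byte[] thumbnail;     // populated only on request
    String author;        // populated only on request
}

class DocumentDtoFactory {

    /** Builds a DTO carrying only the requested metadata. */
    DocumentDto create(SourceDocument source, Set<DocumentField> requested) {
        DocumentDto dto = new DocumentDto();
        dto.headline = source.getHeadline();
        dto.link = source.getLink();
        if (requested.contains(DocumentField.TEXT_SNIPPET))
            dto.textSnippet = source.extractSnippet();
        if (requested.contains(DocumentField.THUMBNAIL))
            dto.thumbnail = source.renderThumbnail();
        if (requested.contains(DocumentField.AUTHOR))
            dto.author = source.getAuthorName();
        return dto;
    }
}

// Usage: factory.create(doc, EnumSet.of(DocumentField.THUMBNAIL, DocumentField.TEXT_SNIPPET));
```

This keeps a single DTO class while making the cost of each piece of metadata an explicit, per-request decision.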
Is there any noticeable cost to creating a DTO that has all the data any of your views could need, and using it everywhere? I would do that, especially since it insulates you from a requirement change down the line where one of the views needs to incorporate data one of the other views already uses.
For example, maybe your Silverlight/Flash view doesn't show the title itself because it's in the thumbnail now, but they decide they want to sort by it later.
To clarify, I do not necessarily think you need to pass down all of the data every time, but I think your DTO class should define all of it. Just don't fall into the pits of premature optimization or analysis paralysis. Do the simplest thing first, then justify added complexity. Throw it all in and profile it. If the performance is unacceptable, optimize and try again.
