Snippet creation in Vespa

YQL (simplified):
select * from sources * where language contains "de" and description contains "computer";
Result (simplified):
{
    "root": {
        ...
        "children": [
            {
                "id": "id:post:post::123",
                "relevance": 0,
                "source": "content",
                "fields": {
                    "sddocname": "post",
                    "description": "<sep /> coffee machine <hi>de</hi> longhi contains a <hi>computer</hi> <sep />"
                }
            }
        ]
    }
}
How to tell Vespa to create the snippets from "computer" but not from "de"?

Use the "filter" annotation, documented in the query language reference: https://docs.vespa.ai/documentation/reference/query-language-reference.html. Terms annotated with "filter": true will not be highlighted:
select * from sources * where language contains ([{"filter":true}]"de") and description contains "computer";
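As a quick sketch, sending that annotated query through the query API could look like this (Python with the requests library; localhost:8080 is the standard query endpoint, matching the curl example further down):

import requests

# YQL with the filter annotation on the language term; annotated terms
# are excluded from highlighting in the generated snippets.
yql = ('select * from sources * where '
       'language contains ([{"filter":true}]"de") '
       'and description contains "computer";')

resp = requests.get("http://localhost:8080/search/", params={"yql": yql})
# Assumes at least one hit; each hit's description carries the snippet.
for hit in resp.json()["root"]["children"]:
    print(hit["fields"]["description"])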

Related

Is it possible to get key-value pairs from the Snowflake API instead of rowType?

I'm working with the Snowflake API, and to deal with the JSON data I need to receive it as key-value pairs instead of rowType.
I've been searching for a solution but haven't found one.
e.g. a table user with name and email attributes:

Name     Email
Kelly    kelly#email.com
Fisher   fisher#email.com
I would request this body:
{
    "statement": "SELECT * FROM user",
    "timeout": 60,
    "database": "DEV",
    "schema": "PLACE",
    "warehouse": "WH",
    "role": "DEV_READER",
    "bindings": {
        "1": {
            "type": "FIXED",
            "value": "123"
        }
    }
}
The results would come like:
{
    "resultSetMetaData": {
        ...
        "rowType": [
            { "name": "Name", ... },
            { "name": "Email", ... }
        ]
    },
    "data": [
        [ "Kelly", "kelly#email.com" ],
        [ "Fisher", "fisher#email.com" ]
    ]
}
And the results needed would be:
{
"resultSetMetaData": {
...
"data": [
[
"Name":"Kelly",
"Email":"kelly#email.com"
],
[
"Name":"Fisher",
"Email":"fisher#email.com"
]
]
}
Thank you for any inputs
The output you describe is not valid JSON, but the return can arrive in a slightly different, valid format:
{
    "resultSetMetaData": {
        ...
    },
    "data": [
        {
            "Name": "Kelly",
            "Email": "kelly#email.com"
        },
        {
            "Name": "Fisher",
            "Email": "fisher#email.com"
        }
    ]
}
To get the API to send it that way, you can change the SQL from select * to:
select object_construct(*) as KVP from "USER";
You can also specify the names of the keys using:
select object_construct('NAME', "NAME", 'EMAIL', "EMAIL") from "USER";
The object_construct function takes an arbitrary number of parameters, as long as the total count is even (alternating keys and values), so:
object_construct('KEY1', VALUE1, 'KEY2', VALUE2, <'KEY_N'>, <VALUE_N>)
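As a rough sketch of consuming this through the SQL API from Python (the account URL and bearer token are placeholders; the assumption here is that the OBJECT value arrives as a JSON-encoded string in the single result column, so each row still needs one json.loads):

import json
import requests

resp = requests.post(
    "https://<account>.snowflakecomputing.com/api/v2/statements",
    headers={"Authorization": "Bearer <token>"},
    json={
        "statement": 'select object_construct(*) as KVP from "USER"',
        "timeout": 60,
        "database": "DEV",
        "schema": "PLACE",
        "warehouse": "WH",
    },
)

# Each row now has a single KVP column; decode it to get a plain dict.
rows = [json.loads(row[0]) for row in resp.json()["data"]]
# e.g. [{"NAME": "Kelly", "EMAIL": "kelly#email.com"}, ...]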

Apache NiFi: Parse data with UpdateRecord Processor

I'm trying to parse some data in NiFi (1.7.1) using the UpdateRecord processor.
The original data is JSON files that I would like to convert to Avro, based on a schema.
The Avro conversion works, but as part of that conversion I also need to parse one array element from the JSON data into a different structure in Avro.
This is a sample data of the input json:
{ "geometry" : {
"coordinates" : [ [ 4.963087975800593, 45.76365595859971 ], [ 4.962874487781098, 45.76320922779652 ], [ 4.962815443439148, 45.763116079159374 ], [ 4.962744732112515, 45.763010484202866 ], [ 4.962096825239138, 45.762112721939246 ] ]} ...}
Being its schema (specified in RecordReader):
{ "type": "record",
"name": "features",
"fields": [
{
"name": "geometry",
"type": {
"type": "record",
"name": "geometry",
"fields": [
{
"name": "coordinatesJson",
"type": {
"type": "array",
"items": {
"type": "array",
"items": "double"
}
}
},
]
}
},
....
]
}
As you can see, coordinates is an array of arrays.
And I need to parse those data to Avro, based on this schema (specified in RecordWriter):
{
"name": "outputdata",
"type": "record",
"fields": [
{"name": "coordinatesAvro",
"type": {
"type": "array",
"items" : {
"type" : "record",
"name" : "coordinatesAvro",
"fields" : [ {
"name" : "X",
"type" : "double"
}, {
"name" : "Y",
"type" : "double"
} ]
}
}
},
.....
]
}
The problem here is that I'm not able to map from coordinatesJson to coordinatesAvro using RecordPath functions.
I tried several mappings, like:

Property                        Value
/coordinatesJson[0..-1]/X       /geometry/coordinatesAvro[*][0]
/coordinatesJson[0..-1]/Y       /geometry/coordinatesAvro[*][1]

It should be a pretty straightforward parsing step, but as I said, I've been going in circles trying to achieve this for a while.
Any help would be really appreciated.
When I run into something like this, I do the following:
1) Transform the JSON into JSON with the structure I need (in your case: coordinatesAvro) using an ExecuteScript processor. I have used ECMAScript, because it lets you simply parse the JSON and transform the resulting objects.
2) Use ConvertJSONToAvro with one common schema (coordinatesAvro in your case) for both Reader and Writer.
This works very well, and I have used it on big-data workloads. It is one possible solution to your problem.
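The core of step 1, independent of the ExecuteScript boilerplate, could look like this (a Python sketch of the same transformation the answer describes in ECMAScript; field names are taken from the schemas above):

import json

def transform(raw):
    # Rewrite the [[x, y], ...] array of arrays into an array of
    # {"X": x, "Y": y} records matching the coordinatesAvro schema.
    doc = json.loads(raw)
    pairs = doc["geometry"]["coordinates"]
    doc["coordinatesAvro"] = [{"X": x, "Y": y} for x, y in pairs]
    del doc["geometry"]
    return json.dumps(doc)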

How to include imported fields in the search results?

I'm using document references to import parent fields into a child document. While searches against the parent fields work, the parent fields themselves do not seem to be included in the search results, only child fields.
To use the example in the documentation, salesperson_name does not appear in the fields entry for id:test:ad::1 when using query=John, or indeed when retrieving id:test:ad::1 via GET directly.
Here's a simplified configuration for my document model:
search definitions
person.sd - the parent
search person {
    document person {
        field name type string {
            indexing: summary | attribute
        }
    }
    fieldset default {
        fields: name
    }
}
event.sd - the child
search event {
    document event {
        field code type string {
            indexing: summary | attribute
        }
        field speaker type reference<person> {
            indexing: summary | attribute
        }
    }
    import field speaker.name as name {}
    fieldset default {
        fields: code
    }
}
documents
p1 - person
{
    "fields": {
        "name": "p1"
    }
}
e1 - event
{
    "fields": {
        "code": "e1",
        "speaker": "id:n1:person::1"
    }
}
query result
curl -s "http://localhost:8080/search/?yql=select%20*%20from%20sources%20*%20where%20name%20contains%20%22p1%22%3B" | python -m json.tool
This returns both e1 and p1, as you would expect, given that name is present in both. But the fields of e1 do not include the name.
{
    "root": {
        "children": [
            {
                "fields": {
                    "documentid": "id:n1:person::1",
                    "name": "p1",
                    "sddocname": "person"
                },
                "id": "id:n1:person::1",
                "relevance": 0.0017429193899782135,
                "source": "music"
            },
            {
                "fields": {
                    "code": "e1",
                    "documentid": "id:n1:event::1",
                    "sddocname": "event",
                    "speaker": "id:n1:person::1"
                },
                "id": "id:n1:event::1",
                "relevance": 0.0017429193899782135,
                "source": "music"
            }
        ],
        ...
        "fields": {
            "totalCount": 2
        }
    }
}
Currently you'll need to add the imported 'name' field to the default summary:
import field speaker.name as name {}
document-summary default {
    summary name type string {}
}
More about explicit document summaries: http://docs.vespa.ai/documentation/document-summaries.html
The result of your query will then include the imported field:
"children": [
{
"fields": {
"documentid": "id:n1:person::1",
"name": "p1",
"sddocname": "person"
},
"id": "id:n1:person::1",
"relevance": 0.0017429193899782135,
"source": "stuff"
},
{
"fields": {
"code": "e1",
"documentid": "id:n1:event::1",
"name": "p1",
"sddocname": "event",
"speaker": "id:n1:person::1"
},
"id": "id:n1:event::1",
"relevance": 0.0017429193899782135,
"source": "stuff"
}
],
We'll improve the documentation on this. Thanks for the very detailed write-up.
Add "summary" to the indexing statement of the imported field in the parent document type.
E.g. in the documentation example, change the "name" field in the "salesperson" document type to say "indexing: attribute | summary".
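In the documentation example, that change would look like this in the parent schema:

field name type string {
    indexing: attribute | summary
}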

NoSQL Structure for handling labeled tags

Currently I have hundreds of thousands of files like so:
{
    "_id": "1234567890",
    "type": "file",
    "name": "Demo File",
    "file_type": "application/pdf",
    "size": "1400",
    "timestamp": "1491421149",
    "folder_id": "root"
}
Currently, I index all the names, and a client can search for files based on the file name. These files also have tags that need to be associated with the file, and each tag has a specific label.
An example would be:
{
    "tags": [
        { "client": "john doe" },
        { "office": "virginia" },
        { "ssn": "1234" }
    ]
}
Is adding the tags array to my file object above the ideal solution if I want to be able to search thousands of files for a client of John Doe?
The only other solution I can think of is having an object per tag, with an array of file IDs associated with each tag, like so:
{
    "_id": "11111111",
    "type": "tag",
    "label": "client",
    "items": [
        "1234567890",
        "1222222222",
        "1333333333"
    ]
}
Since there are a LOT of objects I need to add tags to, I'd rather do it the most efficient way possible FIRST so I don't have to backtrack in the near future when I start running into issues.
Any guidance would be greatly appreciated.
Your original design, with a tags array, works well with Cloudant Search: https://console.ng.bluemix.net/docs/services/Cloudant/api/search.html#search.
With this approach you would define a single design document that will index any tag in the tags array. You do not have to create different views for different tags and you can use the Lucene syntax for queries: http://lucene.apache.org/core/4_3_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Overview.
So, using your example, if you have a document that looks like this with tags:
{
    "_id": "1234567890",
    "type": "file",
    "name": "Demo File",
    "file_type": "application/pdf",
    "size": "1400",
    "timestamp": "1491421149",
    "folder_id": "root",
    "tags": [
        { "client": "john doe" },
        { "office": "virginia" },
        { "ssn": "1234" }
    ]
}
You can create a design document that indexes each tag like so:
{
    "_id": "_design/searchFiles",
    "views": {},
    "language": "javascript",
    "indexes": {
        "byTag": {
            "analyzer": "standard",
            "index": "function (doc) {\n if (doc.type === \"file\" && doc.tags) {\n for (var i=0; i<doc.tags.length; i++) {\n for (var name in doc.tags[i]) {\n index(name, doc.tags[i][name]);\n }\n }\n }\n}"
        }
    }
}
The function looks like this:
function (doc) {
    if (doc.type === "file" && doc.tags) {
        for (var i = 0; i < doc.tags.length; i++) {
            for (var name in doc.tags[i]) {
                index(name, doc.tags[i][name]);
            }
        }
    }
}
Then you would search like this:
https://your_cloudant_account.cloudant.com/your_db/_design/searchFiles/_search/byTag
?q=client:jack+OR+office:virginia
&include_docs=true
Another solution that comes to mind would be to use map/reduce view functions.
To do that, you would add the tags to your original document:
{
    "_id": "1234567890",
    "type": "file",
    "name": "Demo File",
    "file_type": "application/pdf",
    "size": "1400",
    "timestamp": "1491421149",
    "folder_id": "root",
    "client": "john",
    ...
}
Afterwards, you can create a design document, that looks like this:
{
"_id": "_design/query",
"views": {
"byClient": {
"map": "function(doc) { if(doc.client) { emit(doc.client, doc._id) }}"
}
}
}
After the view is processed, you can open it with
GET /YOURDB/_design/query/_view/byClient?key="john"
By adding the query parameter include_docs=true, the whole document will be returned instead of just the id.
You can also write your tags into a tags attribute, but then you have to update the map function to match the new design.
More information about views can be found here:
http://docs.couchdb.org/en/2.0.0/api/ddoc/views.html
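A quick sketch of calling that view from Python (hypothetical host and database; note that the key parameter has to be JSON-encoded, hence the extra quotes):

import requests

url = "http://localhost:5984/YOURDB/_design/query/_view/byClient"
params = {"key": '"john"', "include_docs": "true"}

resp = requests.get(url, params=params)
# With include_docs=true each row carries the full file document.
for row in resp.json()["rows"]:
    print(row["doc"]["name"])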

"There is no index available for this selector" despite the fact I made one

In my data, I have two fields that I want to use as an index together. They are sensorid (any string) and timestamp (yyyy-mm-dd hh:mm:ss).
So I made an index for these two using the Cloudant index generator. This was created successfully and it appears as a design document.
{
    "index": {
        "fields": [
            {
                "name": "sensorid",
                "type": "string"
            },
            {
                "name": "timestamp",
                "type": "string"
            }
        ]
    },
    "type": "text"
}
However, when I try to make the following query to find all documents with a timestamp newer than some value, I am told there is no index available for the selector:
{
    "selector": {
        "timestamp": {
            "$gt": "2015-10-13 16:00:00"
        }
    },
    "fields": [
        "_id",
        "_rev"
    ],
    "sort": [
        {
            "_id": "asc"
        }
    ]
}
What have I done wrong?
It seems that Cloudant Query only allows sorting on fields that are part of the selector.
Therefore your selector should include the _id field and look like:
"selector": {
    "_id": {
        "$gt": 0
    },
    "timestamp": {
        "$gt": "2015-10-13 16:00:00"
    }
}
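As a sketch, the full _find request with the amended selector would then look like this from Python (account, database, and credentials are placeholders):

import requests

query = {
    "selector": {
        "_id": {"$gt": 0},
        "timestamp": {"$gt": "2015-10-13 16:00:00"}
    },
    "fields": ["_id", "_rev"],
    "sort": [{"_id": "asc"}]
}

resp = requests.post(
    "https://your_account.cloudant.com/your_db/_find",
    json=query,
    auth=("user", "password"),
)
# The matching documents come back under the "docs" key.
print(resp.json()["docs"])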
I hope this works for you!
