I'm trying to create a Drupal 7 SPARQL view using my own store. I have the triples loaded in both Virtuoso and Fuseki but can't seem to make the final connection to feed a View. I am able to query the data successfully from the Fuseki and Virtuoso SPARQL UIs; I just can't figure out how to define the Endpoint URI and Dataset so they combine properly to form the SPARQL query that Drupal can use. For instance, it seems that Fuseki embeds the default dataset in the SPARQL endpoint URI (i.e., http://myhostname:3030/default/sparql) while Virtuoso doesn't.
Example data:
@prefix e1: <http://example.com/source/work/dataset/gov/vocab/enhancement/1/> .
e1:author a rdf:Property ;
    ov:csvCol "8"^^xsd:integer ;
    ov:csvHeader "author" ;
    conversion:enhancement_layer "1" ;
    dcterms:isReferencedBy <http://example.com/source/work/dataset/gov/version/2011-Aug-18/conversion/enhancement/1> ;
    rdfs:label "author" ;
    rdfs:range rdfs:Literal ;
    conversion:enhances raw:author .
The dataset field probably isn't what you want to use for this. You should put the full URI of the endpoint into the Endpoint URI field.
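If it helps to see how the pieces combine, here is a minimal Python sketch of the GET URL a SPARQL client would issue against each store. The hostnames and ports are just examples (port 3030 from the question for Fuseki; 8890 is Virtuoso's conventional default); the point is that Fuseki bakes the dataset name into the endpoint path, while Virtuoso exposes one dataset-independent endpoint:

```python
from urllib.parse import urlencode

def sparql_query_url(endpoint, query):
    """Build the GET URL a SPARQL client such as SPARQL Views would issue."""
    return endpoint + "?" + urlencode({"query": query})

query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"

# Fuseki: the dataset ("default") is part of the endpoint path itself.
fuseki_url = sparql_query_url("http://myhostname:3030/default/sparql", query)

# Virtuoso: a single /sparql endpoint serves all graphs; you select data in
# the query (FROM <graph>) or via the default-graph-uri parameter instead.
virtuoso_url = sparql_query_url("http://myhostname:8890/sparql", query)
```

So for Fuseki the whole dataset-qualified path goes into the Endpoint URI field, and the Dataset field can stay empty.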
Feel free to post any other issues in the SPARQL Views issue queue; I don't spend much time on Stack Overflow.
Is there any way of uploading Custom Slot Type values via an API, so that you do not need to type them in manually using the new Alexa Skill Builder interface (if you have many of them)?
I haven't found anything.
My recommendation is to fetch the model via SMAPI first, edit the JSON file with your new values, and then update it via SMAPI again:
ask api get-model -s "enter your skill id here" --stage development -l en-US > model.json
In model.json you can see the slot definitions. Change them (with a script or manually) and push the model back with the update-model subcommand.
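For the "change it with a script" step, here is a minimal Python sketch. The field names follow the interaction-model JSON schema; the type name COLOR_TYPE and its values are made up for illustration, and in practice you would load the dict from the model.json that get-model produced:

```python
import json

# Stand-in for model.json; in practice: model = json.load(open("model.json"))
model = {"interactionModel": {"languageModel": {"types": [
    {"name": "COLOR_TYPE", "values": []}
]}}}

new_values = ["red", "green", "blue"]  # e.g. parsed from a CSV instead

# Append each value to the matching custom slot type.
for slot_type in model["interactionModel"]["languageModel"]["types"]:
    if slot_type["name"] == "COLOR_TYPE":
        slot_type["values"].extend({"name": {"value": v}} for v in new_values)

updated_json = json.dumps(model, indent=2)  # write this back to model.json
```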
References for both commands:
https://developer.amazon.com/docs/smapi/ask-cli-command-reference.html#get-model-subcommand
https://developer.amazon.com/docs/smapi/ask-cli-command-reference.html#update-model-subcommand
It seems not (after searching). There is a feature request logged here with Amazon:
https://forums.developer.amazon.com/questions/9640/api-to-upload-intent-schema-and-sample-utterances.html#answer-77902
Yes, you can do this via the SMAPI API/CLI. Take a look at https://developer.amazon.com/docs/smapi/ask-cli-intro.html for full details - it allows for full model editing via JSON.
In the left bar, below the "Intents" and "Slot Types" options, is the "JSON Editor". There you can write JSON for the new intents you want to add.
Example without slot type
{
  "name": "YesIntent",
  "samples": [
    "Yes",
    "Yeah",
    "I do",
    "I am"
  ]
}
Example with slot type
{
  "name": "NumberIntent",
  "slots": [
    {
      "name": "number",
      "type": "AMAZON.NUMBER"
    }
  ],
  "samples": [
    "{number} is my number",
    "{number}",
    "my number is {number}"
  ]
}
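If you would rather splice an intent like this into the model programmatically instead of pasting it into the JSON Editor, a short Python sketch (the surrounding model structure is abbreviated, and the sanity check on slot placeholders is my own addition, not something the Skill Builder requires you to run):

```python
import json
import re

number_intent = json.loads("""
{
  "name": "NumberIntent",
  "slots": [{"name": "number", "type": "AMAZON.NUMBER"}],
  "samples": ["{number} is my number", "{number}", "my number is {number}"]
}
""")

# Minimal surrounding model; a real model.json has many more fields.
model = {"interactionModel": {"languageModel": {"intents": []}}}
model["interactionModel"]["languageModel"]["intents"].append(number_intent)

# Sanity check: every {placeholder} in a sample must name a declared slot.
for intent in model["interactionModel"]["languageModel"]["intents"]:
    declared = {s["name"] for s in intent.get("slots", [])}
    used = {m for sample in intent.get("samples", [])
            for m in re.findall(r"{(\w+)}", sample)}
    assert used <= declared, f"undeclared slots: {used - declared}"
```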
As other answers suggest, you can use the SMAPI.
Alternatively, you can select the "code editor" tab on the left and drag/drop or copy/paste your schema json code.
https://github.com/williamwdu/Alexa-Custom-Slot-Generator
I wrote this to convert CSV/Excel files to JSON format so you can paste the result into the code editor.
Let me know if you have any questions.
There is no README for the code because I have no time these days.
I have the datastore as follows,
class Data(db.Model):
    project = db.StringProperty()
    project_languages = db.ListProperty(str, default=[])
When a user inputs a language (input_language), I want to output all the projects which contain that language in their language list (project_languages).
I tried to do it in the below way but got an error saying,
BadQueryError: Parse Error: Invalid WHERE Condition
db.GqlQuery("SELECT * FROM Data WHERE input_language IN project_languages")
What should be my query, if I want to get the data in the above mentioned way?
Not sure if you are using Python for the job. If so, I highly recommend the ndb library for Datastore queries. Note that an equality filter on a repeated (list) property matches any entity whose list contains the value, so the query is simply Data.query(Data.project_languages == input_language). The GQL equivalent is SELECT * FROM Data WHERE project_languages = :1.
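In case it helps, here is a plain-Python illustration (the sample rows are invented) of the membership behaviour you want: Datastore treats an equality filter on a repeated (list) property as "matches if any element equals the value", i.e. a contains test:

```python
# Made-up rows standing in for Data entities.
rows = [
    {"project": "blog", "project_languages": ["python", "js"]},
    {"project": "game", "project_languages": ["c++"]},
]

def projects_with_language(rows, input_language):
    """Return projects whose language list contains input_language,
    mirroring what the Datastore equality filter does for list properties."""
    return [r["project"] for r in rows
            if input_language in r["project_languages"]]
```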
In the OWL API, classes may have data properties. For example, a class may have a data property hasCommonName with the value "Something". In the OWL API, is there any facility like SQL's LIKE which allows querying for classes whose hasCommonName contains the word "Some"?
You can use regular expressions to identify the things you need. Consider the following knowledge base:
DataProperty: hasCommonName
Individual: foo
    Facts:
        hasCommonName "Something"
You can retrieve the individual foo by using the following class expression: hasCommonName some string[pattern "Some.*"]. The string[pattern "Some.*"] part specifies the pattern to be matched. Warning: this is currently not supported by all reasoners (it works in HermiT 1.3.7).
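Note that the pattern facet is an XML Schema regular expression, which is implicitly anchored at both ends of the literal. A quick Python illustration of what string[pattern "Some.*"] accepts (emulating the facet with fullmatch):

```python
import re

def matches_facet(pattern, literal):
    """Emulate an XSD pattern facet: the regex must match the WHOLE literal."""
    return re.fullmatch(pattern, literal) is not None

# string[pattern "Some.*"] accepts "Something", but rejects literals that
# merely contain "Some" later on; for a SQL-LIKE '%Some%' behaviour you
# would need the pattern ".*Some.*".
```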
You need to look at some SPARQL tutorials. You can write something like this:
SELECT * WHERE
{
  ?pizza rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty :hasTopping ;
    owl:someValuesFrom :TomatoTopping ] .
}
Basically, you need to define the correct predicate based on your restriction.
I have several entities that I am searching across that include dates, and the Search API works great across all of them except for one thing - sorting.
Here's the data model for one of my entities (simplified of course):
class DepositReceipt(ndb.Expando):
    # Sets creation date
    creation_date = ndb.DateTimeProperty(auto_now_add=True)
And the code to create the search.Document where de is an instance of the entity:
document = search.Document(doc_id=de.key.urlsafe(),
    fields=[search.TextField(name='deposit_key', value=de.key.urlsafe()),
            search.DateField(name='created', value=de.creation_date),
            search.TextField(name='settings', value=de.settings.urlsafe()),
            ])
This returns a valid document.
And finally the problem line. I took this snippet from the official GAE Search API tutorial and just changed the direction of the sort to DESCENDING and changed the search expression to created (the date property from the Document above).
expr_list = [search.SortExpression(
    expression="created", default_value='',
    direction=search.SortExpression.DESCENDING)]
I don't think this is important, but the rest of the search code looks like this:
sort_opts = search.SortOptions(expressions=expr_list)
query_options = search.QueryOptions(
    cursor=query_cursor,
    limit=_NUM_RESULTS,
    sort_options=sort_opts)
query_obj = search.Query(query_string=query, options=query_options)
search_results = search.Index(name=index_name).search(query=query_obj)
In production, I get this error message:
InvalidRequest: Failed to parse search request "settings:ag5zfmdoaWRvbmF0aW9uc3IQCxIIU2V0dGluZ3MYmewDDA"; failed to parse date
Changing the expression="created" to anything else works perfectly fine. This also happens across my other entity types that use dates, so I have no idea what's going on. Advice?
I think default_value needs to be a valid date rather than the empty string '' you are passing: for a sort expression over a DateField, try something like default_value=datetime.date(1970, 1, 1) (any sentinel date should do).
There were indications in the Google I/O talk on the Search API that we can do searches based on geolocation.
I can't find an appropriate field to store location info.
How can I store geolocation info in the document so I could issue queries based on distance from a particular GPS location?
On June 28, 2012, Google integrated the GeoPoint class into the Google App Engine Search API library with the specific intent of making spatial points searchable.
GeoPoints are stored as GeoFields within the Search Document. Google provides this support documentation outlining the use of the GeoPoint with the Search API.
The following example declares a GeoPoint and assigns it to a GeoField in a Search Document. These new classes provide a lot more functionality than what is listed below, but this code is a starting point for a basic understanding of how to use the new spatial search functionality.
Constructing a document with an associated GeoPoint
## IMPORTS ##
from datetime import datetime
from google.appengine.api import search

def CreateDocument(content, lat, lng):
    geopoint = search.GeoPoint(lat, lng)
    return search.Document(
        fields=[
            search.HtmlField(name='content', value=content),
            search.DateField(name='date', value=datetime.now().date()),
            search.GeoField(name='location', value=geopoint)
        ])
Searching the GeoPoint document field (Slightly modified from the Search API docs)
## IMPORTS ##
from google.appengine.api import search

ndx = search.Index(DOCUMENT_INDEX)

query = "distance(location, geopoint(-33.857, 151.215)) < 4500"
loc_expr = "distance(location, geopoint(-33.857, 151.215))"
sortexpr = search.SortExpression(
    expression=loc_expr,
    direction=search.SortExpression.ASCENDING, default_value=4501)
search_query = search.Query(
    query_string=query,
    options=search.QueryOptions(
        sort_options=search.SortOptions(expressions=[sortexpr])))
results = ndx.search(search_query)
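As a rough mental model of what the distance() function in that query computes, here is a haversine great-circle distance in meters (plain Python, not part of the Search API; 6,371 km mean Earth radius assumed):

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in meters between two lat/lon points."""
    rlat1, rlon1, rlat2, rlon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = rlat2 - rlat1, rlon2 - rlon1
    a = sin(dlat / 2) ** 2 + cos(rlat1) * cos(rlat2) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Documents whose 'location' GeoField lies within 4500 m of
# (-33.857, 151.215) would satisfy the query string above.
```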