I'm developing an application that allows users to tag product purchases (via a web app).
I intend to use the tags to automatically query DBpedia (and possibly other open data sources such as Freebase).
The top N results returned from DBpedia will be displayed to the user, who will select the one that most closely resembles the tag they entered. (I will only extract specific data.)
For example:
The user enters the tag 'iPhone' and a SPARQL query is sent to DBpedia. The results are parsed and some data on each result is shown to the user, who then selects the one that most closely resembles what they bought.
I want to extract some of the data from the user's selected DBpedia result and store it for marketing purposes at a later stage (ideally via some call to an API).
I was thinking of either Bigdata or Protégé OWL, but I have no experience of using either.
Can anybody suggest the best tool for this task, along with its advantages/disadvantages, learning curve, etc.?
Thanks
It all depends on what you want to do with the data that you've extracted. The simplest option is just to store the reconciled entity URI along with your other data in a relational database or even a NoSQL database. This lets you easily query Freebase and DBpedia for that entity later on.
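For example, a minimal sketch in Python with sqlite3 (table and column names are made up for illustration) of keeping just the reconciled DBpedia URI next to your own purchase/tag data:

import sqlite3

# hypothetical local store for tagged purchases
conn = sqlite3.connect("purchases.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS tagged_purchase (
        id         INTEGER PRIMARY KEY,
        user_id    INTEGER NOT NULL,
        tag        TEXT NOT NULL,
        entity_uri TEXT              -- e.g. http://dbpedia.org/resource/IPhone
    )
""")
conn.execute(
    "INSERT INTO tagged_purchase (user_id, tag, entity_uri) VALUES (?, ?, ?)",
    (42, "iPhone", "http://dbpedia.org/resource/IPhone"),
)
conn.commit()

You can then re-query DBpedia or Freebase for that URI whenever you need fresh details, instead of copying everything locally.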
If you want to pull in "everything there is to know" about an entity from Freebase and DBpedia, then you're probably better off with a triple store. With this approach, you can query all the data locally; but now you have to worry about keeping it updated.
For the kind of thing you have in mind, I don't think you necessarily need a highly scalable triplestore solution. More important, it seems to me, is that you have a toolkit for easy execution of SPARQL queries, result processing, and quick local caching of RDF data.
With those things in mind, I'd recommend having a look at OpenRDF Sesame. It's a Java toolkit and API for working with RDF and SPARQL with support for multiple storage backends. It has a few built-in stores that perform well for what you need (scaling up to about 100 million facts in a single store), and if you do find you need a bigger/better storage solution, stores like BigData or OWLIM are pretty much just drop-in replacements for Sesame's own storage backends, so you get to switch without having to make large changes to your code.
Just to give you an idea: the following lines of code use Sesame to fire a SPARQL query against DBpedia and process the result:
SPARQLRepository dbpediaEndpoint = new SPARQLRepository("http://dbpedia.org/sparql");
dbpediaEndpoint.initialize();

RepositoryConnection conn = dbpediaEndpoint.getConnection();
try {
    String queryString = "PREFIX foaf: <http://xmlns.com/foaf/0.1/> "
            + "SELECT ?x WHERE { ?x a foaf:Person } LIMIT 10";
    TupleQuery query = conn.prepareTupleQuery(QueryLanguage.SPARQL, queryString);
    TupleQueryResult result = query.evaluate();

    while (result.hasNext()) {
        // and so on and so forth, see the Sesame manual/javadocs
        // for details and examples
    }
}
finally {
    conn.close();
}
(disclosure: I work on Sesame)
How is aggregation achieved with DynamoDB? MongoDB and Couchbase have map-reduce support.
Let's say we are building a tech blog where users can post articles, and say articles can be tagged.
user
{
    id: 1235,
    name: "John",
    ...
}
article
{
    id: 789,
    title: "dynamodb use cases",
    author: 12345,  // userid
    tags: ["dynamodb", "aws", "nosql", "document database"]
}
In the user interface we want to show, for the current user, the tags and their respective counts.
How can we achieve the following aggregation?
{
    userid: 12,
    tag_stats: {
        "dynamodb": 3,
        "nosql": 8
    }
}
We will provide this data through a REST API, and it will be called frequently, since this information is shown on the app's main page.
I can think of extracting all the documents and doing the aggregation at the application level, but I suspect my read capacity units would be exhausted.
We could use tools like EMR, Redshift, BigQuery, or AWS Lambda, but I think those are meant for data warehousing.
I would like to know of other and better ways of achieving the same thing.
How are people achieving simple dynamic queries like these, having chosen DynamoDB as their primary data store, considering cost and response time?
Long story short: DynamoDB does not support this. It's not built for this use case. It's intended for quick data access with low latency, and it simply does not offer any aggregation functionality.
You have three main options:
Export the DynamoDB data to Redshift or EMR Hive. Then you can execute SQL queries on stale data. The benefit of this approach is that it consumes RCUs just once, but you will be working with outdated data.
Use the DynamoDB connector for Hive and query DynamoDB directly. Again you can write arbitrary SQL queries, but in this case the queries access the data in DynamoDB directly. The downside is that they consume read capacity on every query you run.
Maintain aggregated data in a separate table using DynamoDB Streams. For example, you can have a table with UserId as the partition key and a nested map of tags and counts as an attribute. On every update to your original data, DynamoDB Streams will invoke a Lambda function or some code on your hosts to update the aggregate table. This is the most cost-efficient method, but you will need to implement additional code for each new query.
Of course you can extract the data at the application level and aggregate it there, but I would not recommend doing that. Unless you have a small table, you will need to think about throttling, using only part of the provisioned capacity (you want to consume, say, 20% of your RCUs for aggregation, not 100%), and how to distribute the work among multiple workers.
Both Redshift and Hive already know how to do this. Redshift relies on multiple worker nodes when it executes a query, while Hive is built on top of MapReduce. Also, both Redshift and Hive can use a predefined percentage of your RCU throughput.
DynamoDB is a pure key/value store and does not support aggregation out of the box.
If you really want to do aggregation using DynamoDB, here are some hints.
For your particular case, let's have a table named articles.
To do the aggregation we need an extra table, user-stats, holding userId and tag_stats.
Enable DynamoDB Streams on the articles table.
Create a new Lambda function, user-stats-aggregate, which is subscribed to the articles DynamoDB stream and receives NEW_AND_OLD_IMAGES on every create/update/delete operation on the articles table.
The Lambda will perform the following logic (a sketch follows after these steps):
If there is no old image, take the current tags and increase each occurrence by 1 in user-stats for this user. (Keep in mind there may be no initial record in user-stats for this user.)
If there is an old image, check whether each tag was added or removed and apply a change of +1 or -1 accordingly for each affected tag of that user.
Stand up an API service that retrieves these user stats.
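A minimal sketch of such a Lambda handler in Python with boto3, under a few assumptions: the stream is configured with NEW_AND_OLD_IMAGES, the aggregate table is called user-stats, and for simplicity it keeps one item per (userId, tag) pair with a numeric count attribute rather than a nested map:

import boto3

dynamodb = boto3.resource("dynamodb")
stats_table = dynamodb.Table("user-stats")  # hypothetical aggregate table

def handler(event, context):
    for record in event["Records"]:
        old_image = record["dynamodb"].get("OldImage", {})
        new_image = record["dynamodb"].get("NewImage", {})

        # tags are assumed to be stored as a list of strings on the article item
        old_tags = {t["S"] for t in old_image.get("tags", {}).get("L", [])}
        new_tags = {t["S"] for t in new_image.get("tags", {}).get("L", [])}
        user_id = (new_image or old_image)["author"]["N"]

        # +1 for tags that appeared, -1 for tags that disappeared
        changes = [(tag, 1) for tag in new_tags - old_tags] + \
                  [(tag, -1) for tag in old_tags - new_tags]
        for tag, delta in changes:
            stats_table.update_item(
                Key={"userId": user_id, "tag": tag},
                UpdateExpression="ADD #c :delta",       # creates count if missing
                ExpressionAttributeNames={"#c": "count"},
                ExpressionAttributeValues={":delta": delta},
            )

The API service can then Query user-stats by userId to return all tags and counts for a user.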
Usually, aggregation in DynamoDB can be done using DynamoDB Streams, Lambdas that do the aggregation, and extra tables keeping aggregated results at different granularities (minutes, hours, days, years, ...).
This gives you near-real-time aggregation without the need to compute it on the fly for every request; you query the aggregated data instead.
Basic aggregation can also be done using scan() and query() in a Lambda.
Let's say I use Elasticsearch in my application for searching restaurants near me.
I get all the sorted restaurant ids from Elasticsearch, and using these ids I fetch data like the name, location, and popular menus of each restaurant from the RDB.
As you can guess, it takes some time to get the data from the RDB. If I stored all the data used by the application in Elasticsearch, I could make it faster.
But I'm wondering what the recommended way to store data in Elasticsearch is, and what to consider when choosing it.
I think there are some options, like the ones below:
Store only the data used for search
Store all the data used for search and display
Thanks!
This is a very interesting but very common question, and normally every application needs to decide this. I can provide some data points that should help you make an informed decision.
Elasticsearch is a near-real-time (NRT) search engine, and there will always be some latency when you update ES from your RDB, so some items that are already in the RDB will not yet be in ES and thus will not show up in your search results.
Considering the above, why do you want to make another call to the RDB: to fetch the latest info for your ES search results, or for some other reason, like avoiding fetching/storing large data in ES?
For every field, ES provides a way to store it or not, using the store param or the _source field (enabled by default). If neither is enabled, you can't fetch the actual value and you have to go to the RDB (a small sketch follows below).
An RDB call to fetch field values puts a penalty on performance; have you benchmarked it against fetching the values directly from ES?
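To make the store/_source point concrete, here is a minimal sketch in Python (assuming the elasticsearch-py 8.x client; index and field names are made up) of the two options: keeping full documents in _source so ES can serve both search and display, versus disabling _source and storing only what search needs:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Option 2 from the question: everything needed for display lives in _source (the default)
es.indices.create(index="restaurants", mappings={
    "properties": {
        "name":     {"type": "text"},
        "location": {"type": "geo_point"},
        "menus":    {"type": "text"},
    }
})

# Option 1 from the question: _source disabled, only search fields kept;
# anything not stored here has to be fetched from the RDB by id afterwards
es.indices.create(index="restaurants_search_only", mappings={
    "_source": {"enabled": False},
    "properties": {
        "name":     {"type": "text", "store": True},
        "location": {"type": "geo_point"},
    }
})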
Every search system has its own functional and non-functional requirements. Based on the above points, I hope you have more information to help you make a better decision.
This question might be relevant for any document-based NoSQL database.
I'm building an interest-specific social network and decided to go with DynamoDB because of its scalability and no-pain-administration factors. There are only two main entities in the database: users and posts.
The requirements for the common queries are very simple:
Home feed (feed of the people I'm following)
My/user feed (my feed, or a specific user's feed)
List of users I (or a given user) follow
List of followers
Here is the database schema I have come up with so far (legend: __thisIsHashKey and _thisIsRangeKey):
timeline = { // post
    __username: "totocaster",
    _date: "1245678901345",
    record_type: "collection",
    items: ["2d931510-d99f-494a-8c67-87feb05e1594","2d931510-d99f-494a-8c67-87feb05e1594","2d931510-d99f-494a-8c67-87feb05e1594","2d931510-d99f-494a-8c67-87feb05e1594","2d931510-d99f-494a-8c67-87feb05e1594"],
    number_of_likes: 123,
    description: "Hello, this is cool"
}
timeline = { // new follower
    __username: "totocaster",
    _date: "1245678901345",
    record_type: "follow",
    follower: "tamuna123"
}
timeline = { // new like
    __username: "totocaster",
    _date: "1245678901345",
    record_type: "like",
    liker: "tamuna123",
    like_date: "123255634567456"
}
users = {
    __username: "totocaster",
    avatar_url: "2d931510-d99f-494a-8c67-87feb05e1594",
    followers: ["don_gio", "tamuna123", "barbie", "mikecsharp", "bassman"],
    following: ["tamuna123", "barbie", "mikecsharp"],
    likes: [
        {
            username: "barbie",
            date: "123255634567456"
        },
        {
            username: "mikecsharp",
            date: "123255634567456"
        }
    ],
    full_name: "Toto Tvalavadze",
    password: "Hashed Key",
    email: "totocaster@myemailprovider.com"
}
As you can see, I came up with storing all my posts directly in the timeline collection. This way I can query for posts using date and username (hash and range keys). Everything seems fine, but here is the problem:
I cannot query for the user timeline in one go. This will be one of the most demanded queries in the system, and I cannot come up with an efficient way to do it. Please help. Thanks.
I happen to work with news feeds daily. (I'm the author of Stream-Framework and the founder of getstream.io.)
The most common solutions I see are:
Cassandra (Instagram)
Redis (expensive, but easy)
MongoDB
DynamoDB
RocksDB (LinkedIn)
Most people use either fanout on write or fanout on read. This makes it easier to build a working solution, but it can get expensive quickly. Your best bet is to use a combination of those two approaches: do a fanout on write in most cases, but keep very popular feeds in memory (a minimal fanout-on-write sketch follows below).
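To make that concrete, here is a minimal sketch in Python with boto3 of fanout on write against the asker's timeline/users tables (attribute names follow the question's schema; the helper name, the follower lookup, and the timestamp format are assumptions):

import time
import boto3

dynamodb = boto3.resource("dynamodb")
timeline = dynamodb.Table("timeline")  # hash key: username, range key: date
users = dynamodb.Table("users")        # hash key: username

def publish_post(author, description, items):
    """Copy a new post into the author's feed and every follower's feed."""
    profile = users.get_item(Key={"username": author}).get("Item", {})
    followers = profile.get("followers", [])
    now = str(int(time.time() * 1000))

    with timeline.batch_writer() as batch:
        for feed_owner in [author] + list(followers):
            batch.put_item(Item={
                "username": feed_owner,    # whose home feed this lands in
                "date": now,
                "record_type": "post",
                "author": author,
                "description": description,
                "items": items,
            })

The trade-off described above shows up here: one post costs one write per follower, which is why very popular feeds are usually handled differently (kept in memory or assembled at read time).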
Stream-Framework is open source and supports Cassandra/Redis & Python.
getstream.io is a hosted solution built on top of Go & RocksDB.
If you do end up using DynamoDB, be sure to set up the right partition key:
https://shinesolutions.com/2016/06/27/a-deep-dive-into-dynamodb-partitions/
Also note that a Redis- or DynamoDB-based solution will get expensive pretty quickly. You'll get the lowest cost per user by leveraging Cassandra or RocksDB.
I would check out the Titan graph database (http://thinkaurelius.github.com/titan/) and Neo4j (http://www.neo4j.org/).
I know Titan claims to scale pretty well with large data sets.
Ultimately I think your model maps well to a graph. Users and posts would be nodes, and then you can connect them arbitrarily via edges. A user (node) is a friend (edge) of another user (node).
A user (node) has many posts (nodes) in their timeline. Then you can run interesting traversals via the graph.
You can also use Amazon Neptune (https://aws.amazon.com/neptune/), a graph DB, which is well suited for social networks. I don't think DynamoDB would be a good choice for your use cases.
I am making a mobile iOS app. A user can create an account and upload strings. It will be like Twitter: you can follow people, have profile pictures, etc. I cannot estimate the user base, but if the app takes off, the total dataset may be fairly large.
I am storing the actual objects on Amazon S3 and the keys in a database, since listing Amazon S3 keys is slow. So which would be better for storing the keys?
This is my knowledge of SimpleDB and DynamoDB:
SimpleDB:
Cheap
Performs well
Designed for small/medium datasets
Can query using select expressions
DynamoDB:
Costly
Extremely scalable
Performs great; millisecond response
Cannot query
These points are correct to my understanding. DynamoDB is more about killer speed and scalability, SimpleDB is more about querying and price (while still delivering good performance). But look at it this way: which will be faster, downloading ALL the keys from DynamoDB, or doing a select query with SimpleDB... hard, right? One option uses a blazing-fast database to download a lot (and then we have to match the keys ourselves), and the other uses a reasonably good-performing database to query for and download only the few correct objects. So, which is faster:
DynamoDB downloading everything and matching, OR SimpleDB querying and downloading just the matches?
(NOTE: matching just means using -rangeOfString and string comparison, nothing power-consuming, time-inefficient, or server-side.)
My S3 keys will use this format for every type of object
accountUsername:typeOfObject:randomGeneratedKey
E.g. If you are referencing to an account object
Rohan:Account:shd83SHD93028rF
Or a profile picture:
Rohan:ProfilePic:Nck83S348DD93028rF37849SNDh
I have the randomly generated part for uniqueness; it does not refer to anything, it is simply there so that keys are never repeated and no two objects overlap.
In my app I can choose either SimpleDB or DynamoDB, so here are the two options:
Use SimpleDB: store the keys in that format but don't use the format for any lookups; instead use attributes stored with SimpleDB. So I store the key with attributes like username, type, and whatever else I would otherwise have to encode in the key format. If I want to get the account object for user 'Rohan', I just use SimpleDB Select to query on the 'username' attribute and the 'type' attribute (where I match for 'account').
Use DynamoDB: store the keys, each in the illustrated format. I scan the whole database, returning every single key, then take advantage of the key format and use -rangeOfString to match the ones I want, and download those from S3.
Also, SimpleDB is apparently geographically distributed; how can I enable that?
So which is quicker and more reliable: using SimpleDB to query keys by attributes, or using DynamoDB to store all keys, scan (download all keys), and match with e.g. -rangeOfString? Mind that these are just short keys that act as pointers to S3 objects.
Here is my last question (and the number of objects in the database will depend on the answer): should I:
Create a separate key/object for every single object a user has
Create an account key/object and store all information inside there
There would obviously be different advantages and disadvantages between these two options. For example, retrieval would be quicker with everything separate, while storing it all in one account object is more organized and keeps the dataset smaller.
So what do you think?
Thanks for the help! I have put a bounty on this; I really need an answer ASAP.
Wow! What a Question :)
Ok, let's discuss some aspects:
S3
S3 listing performance is most likely low because you're not adding a Prefix when listing keys.
If you shard by storing the objects like type/owner/id, listing all the ids for a given owner (prefixed as type/owner/) will be fast, or at least faster than listing everything at once.
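For example, a minimal sketch in Python with boto3 (the bucket and helper names are made up) of listing just one owner's keys by prefix:

import boto3

s3 = boto3.client("s3")

def list_keys_for_owner(obj_type, owner, bucket="my-app-bucket"):
    """List only the keys under type/owner/ instead of the whole bucket."""
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=f"{obj_type}/{owner}/"):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys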
Dynamo Versus SimpleDB
In general, that's my advice:
Use SimpleDB when:
Your entity storage isn't going to grow past 10GB
You need to apply complex queries involving multiple fields
Your queries aren't well defined
You can benefit from multi-valued data types
Use DynamoDB when:
Your entity storage will grow past 10GB
You want to scale demand/throughput as it grows
Your queries and model are well defined and unlikely to change
Your model is dynamic, involving a loose schema
You can cache your queries on the client side (so you can save throughput by checking the cache before hitting Dynamo)
You want to do aggregate/rollup summaries using atomic updates
Given your current description, it seems SimpleDB is actually the better fit (a small query sketch follows this list), since:
- Your model isn't completely defined
- You can defer some decisions, since it takes a while to hit the 10GiB limit
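As a rough illustration of that SimpleDB route, a minimal sketch in Python with boto3; the domain name, attribute names, and region are assumptions, and the query mirrors option 1 from the question (look up an S3 key by username and type attributes):

import boto3

sdb = boto3.client("sdb", region_name="us-east-1")  # region assumed

def find_account_key(username):
    """Return the S3 key of a user's account object, or None if not found."""
    result = sdb.select(
        SelectExpression=(
            "select s3key from `objects` "
            f"where username = '{username}' and type = 'Account'"
        )
    )
    items = result.get("Items", [])
    if not items:
        return None
    # each item carries a list of {Name, Value} attribute pairs
    attrs = {a["Name"]: a["Value"] for a in items[0]["Attributes"]}
    return attrs.get("s3key")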
Geographical SimpleDB
It isn't supported; it works only from us-east-1, AFAIK.
Key Naming
This applies mostly to Dynamo: whenever you can, use a hash + range key. But you could also create keys using only a hash and apply some queries, like:
List all my records on table T which starts with accountid:
List all my records on table T which starts with accountid:image
However, those are all Scans. Bear that in mind.
(See this for an overview: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/API_Scan.html)
Bonus Track
If you're using Java, cloudy-data on Maven Central includes SimpleJPA with some extensions to Map Blob Fields to S3. So give it a look:
http://bitbucket.org/ingenieux/cloudy
Thank you
What is the proper way to perform mass updates on entities in a Google App Engine Datastore? Can it be done without having to retrieve the entities?
For example, what would be the GAE equivalent of something like this in SQL:
UPDATE dbo.authors
SET city = replace(city, 'Salt', 'Olympic')
WHERE city LIKE 'Salt%';
There isn't a direct translation. The datastore really has no concept of updates; all you can do is overwrite old entities with a new entity at the same address (key). To change an entity, you must fetch it from the datastore, modify it locally, and then save it back.
There's also no equivalent to the LIKE operator. While wildcard suffix matching is possible with some tricks, if you wanted to match '%Salt%' you'd have to read every single entity into memory and do the string comparison locally.
So it's not going to be quite as clean or efficient as SQL. This is a tradeoff with most distributed object stores, and the datastore is no exception.
That said, the mapper library is available to facilitate such batch updates. Follow the example and use something like this for your process function:
def process(entity):
if entity.city.startswith('Salt'):
entity.city = entity.city.replace('Salt', 'Olympic')
yield op.db.Put(entity)
There are other alternatives besides the mapper. The most important optimization tip is to batch your updates; don't save back each updated entity individually. If you use the mapper and yield puts, this is handled automatically.
No, it can't be done without retrieving the entities.
There's no such thing as a '1000 max record limit', but there is of course a timeout on any single request, and if you have large numbers of entities to modify, a simple iteration will probably fall foul of that. You could manage this by splitting the work into multiple operations and keeping track with a query cursor (a sketch follows below), or potentially by using the MapReduce framework.
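A minimal sketch of the cursor-based approach, assuming the old google.appengine.ext.db API and a model shaped like the question's authors table; the model definition, batch size, and the range filter used to approximate the 'Salt%' prefix match are illustrative:

from google.appengine.ext import db

class authors(db.Model):
    city = db.StringProperty()

def update_batch(cursor=None, batch_size=100):
    # approximate LIKE 'Salt%' with an inequality range on the indexed property
    query = authors.all().filter('city >=', 'Salt').filter('city <', 'Salu')
    if cursor:
        query.with_cursor(cursor)
    entities = query.fetch(batch_size)
    for entity in entities:
        entity.city = entity.city.replace('Salt', 'Olympic')
    db.put(entities)  # write the whole batch back in one call
    # hand this cursor to the next request/task; None means we're done
    return query.cursor() if entities else None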
You could use the query class: http://code.google.com/appengine/docs/python/datastore/queryclass.html
# emulate LIKE 'Salt%' with a range filter, since the datastore has no LIKE
query = authors.all().filter('city >=', 'Salt').filter('city <', 'Salu')
for record in query:
    record.city = record.city.replace('Salt', 'Olympic')
    record.put()  # the change isn't persisted until the entity is saved back