Most triple stores I have read about are said to scale to around 0.5 billion triples.
I am interested to know whether there is a theoretical reason why they have to have an upper limit, and whether you know of any particular ways to make them more scalable.
I am curious to know if existing triple stores do things like this:
Represent URIs with integers
Keep the integers in sorted order
Search over the integers instead of the URIs, which I would imagine must be faster (because you can do things like a binary search); a rough sketch of what I mean is below
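For concreteness, here's a rough Python sketch of what I have in mind (the layout and names are purely illustrative, not taken from any real triple store):

```python
# Toy sketch: intern URIs as integers and keep the triples as sorted integer
# tuples so that lookups can use binary search. Purely illustrative.
import bisect

uri_to_id = {}   # URI (or literal) -> small integer
id_to_uri = []   # integer -> URI (or literal)

def intern(term):
    """Return a stable integer id for a term, assigning one on first sight."""
    if term not in uri_to_id:
        uri_to_id[term] = len(id_to_uri)
        id_to_uri.append(term)
    return uri_to_id[term]

# Triples stored as (subject, predicate, object) integer tuples, kept sorted.
triples = sorted(
    (intern(s), intern(p), intern(o))
    for s, p, o in [
        ("http://example.org/alice", "http://xmlns.com/foaf/0.1/knows",
         "http://example.org/bob"),
        ("http://example.org/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ]
)

def triples_for_subject(subject):
    """Binary-search the sorted list for every triple with the given subject."""
    sid = uri_to_id.get(subject)
    if sid is None:
        return []
    lo = bisect.bisect_left(triples, (sid,))       # first triple with this subject
    hi = bisect.bisect_left(triples, (sid + 1,))   # first triple with a later subject
    return [(id_to_uri[s], id_to_uri[p], id_to_uri[o]) for s, p, o in triples[lo:hi]]

print(triples_for_subject("http://example.org/alice"))
```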
Thoughts ...
Just to get to 500 million, a triple store has to do all of that and more. I have spent several years working on a triple store implementation, and I can tell you that breaking 1 billion triples is not as simple as it may seem.
The problem is that many RDF queries are second- or third-order (and higher orders are far from unheard of). This means that you are not only querying a set of entities, but simultaneously the data about that set of entities, the data about the entities' schemas, and the data describing the schema language used to describe those schemas.
All of this without any of the constraints available to a relational database to allow it to make assumptions about the shape of this data/metadata/metametadata/etc.
There are ways to get beyond 500 million, but they are far from trivial, and the low-hanging fruit (i.e. the approaches you have mentioned) was required just to get to where we are now.
That being said, the flexibility provided by an RDF store, combined with the denotational semantics available via its interpretation in Description Logics, makes it all worthwhile.
Apologies if this is a beginner question. I'm building a text-to-speech model, and I was wondering whether my training dataset should be "realistically" distributed (i.e. have the same distribution as the data the model will be used on), or whether it should be uniformly distributed to make sure it performs well on all kinds of sentences. Thanks.
I'd say that this depends on the dataset size. If you have a really, really small dataset (which is common in some domains and rare in others), then you'd want to ensure that all the "important kinds of data" (whatever that means for your task) are represented even if they're relatively rare. A realistic distribution is better if you have a large enough dataset that all the key scenarios would be adequately represented anyway.
Also, if mistakes on certain data items are more important than others (which is likely in some domains), then it may make sense to overrepresent them, as you’re not optimizing for the average case of the real distribution.
There's also the case of targeted annotation, where you look at the errors your model is making and specifically annotate extra data to overrepresent those cases. This helps because some types of data happen to be both very common and trivial to solve, so adding extra training data for them takes effort but doesn't change the results in any way.
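To make the targeted/overrepresentation idea concrete, here's a toy Python sketch; the category names, counts, and split are invented, not a recommendation:

```python
# Toy sketch: cap the dominant, easy category and keep all of the rare,
# error-prone ones, instead of sampling with the realistic distribution.
import random

random.seed(0)

# Sentences grouped by "kind" (invented examples for a TTS-style corpus).
pools = {
    "plain_prose":   [f"plain sentence {i}" for i in range(10_000)],
    "numbers":       [f"it costs {i} dollars" for i in range(200)],
    "abbreviations": [f"Dr. Smith lives on 5th Ave., apt. {i}" for i in range(100)],
}

# Realistic sampling: rare kinds may barely show up in a 1,000-sentence set.
everything = [s for pool in pools.values() for s in pool]
realistic = random.sample(everything, 1_000)

# Targeted sampling: most of the budget still goes to prose, but the rare,
# important kinds are fully included.
targeted = random.sample(pools["plain_prose"], 700) \
    + pools["numbers"] + pools["abbreviations"]

print(len(realistic), len(targeted))
```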
So I'm making a database for a personal project just to get more than my feet wet with PostgreSQL and certain languages and applications that can use a PostgreSQL database.
I've come to the realization that using an array isn't necessarily even compliant with 1NF (arrays are not atomic, right?). So my question is: do I lose efficiency or data safety this way? Should I learn early on not to use arrays?
Short answer to the title: No
A bit longer answer:
You should learn to use arrays when appropriate. Arrays are not bad design in themselves; they are as atomic as a character varying field (an array of characters, no?) and they exist to make our lives easier and our databases faster and lighter. There are, however, portability concerns: most database systems don't support arrays, or support them differently than Postgres does.
Example:
You have a blog with posts and tags, and each post may have 0 or more tags. The first thing that comes to mind is to make a separate table with two columns, postid and tagid, and assign the tags in that table.
If we need to search posts by tagid, then the extra table is necessary (with appropriate indexes, of course).
But if we only want the tag information to be shown as the post's extra info, then we can easily add an integer array column to the posts table and extract the information from there. This can still be done with the extra table, but using an array reduces the size of the database (no extra table or extra rows) and simplifies the query by joining one table fewer. It also seems easier for the human eye to follow (that last part is in the eye of the beholder, but I think I speak for the majority here). If our tags are preloaded, then not even one join is necessary. A rough sketch of the array layout is below.
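To make that concrete, here is a minimal sketch of the array-column layout, driven from Python with psycopg2; the database name and connection details are assumptions, and the column names just follow the example above:

```python
# Sketch of the "tags as an integer array on the post row" layout.
# Assumes a local database called "blog"; adjust the connection as needed.
import psycopg2

conn = psycopg2.connect(dbname="blog")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS posts (
        postid  serial PRIMARY KEY,
        title   text NOT NULL,
        tagids  integer[] NOT NULL DEFAULT '{}'
    )
""")

# psycopg2 adapts a Python list to a PostgreSQL array automatically.
cur.execute("INSERT INTO posts (title, tagids) VALUES (%s, %s)",
            ("Hello world", [1, 4, 7]))

# Showing a post together with its tag ids needs no join at all;
# the join-table variant would need an extra (postid, tagid) table here.
cur.execute("SELECT title, tagids FROM posts ORDER BY postid DESC LIMIT 1")
print(cur.fetchone())   # e.g. ('Hello world', [1, 4, 7])

conn.commit()
cur.close()
conn.close()
```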
The example may be poor but it's the first that came to mind.
Conclusion:
Arrays are not necessary. They can be harmful if you use them wrong. You can live without them and have a great, fast and optimized database. If you are concerned about portability (e.g. rewriting your system to work with other databases), then you must not use arrays.
If you are sure you'll stick with Postgres, then you can safely use arrays where you find appropriate. They exist for a reason and are neither bad design nor non-compliant. When you use them in the right places, they can help a little with simplicity of database structures and your code, as well as space and speed optimization. That is all.
Whether an array is atomic depends on what you're interested in. If you generally want the whole array then it's atomic. If you are more interested in the individual elements then it is being used as structure. A text field is basically a list of characters. However, we're usually interested in the whole string.
Now, from a practical viewpoint: many frameworks and ORMs don't automatically unpack PostgreSQL's array types. Also, if you want to port the database to e.g. MySQL, then you'll have to unpack the arrays into a separate table anyway, since there is no equivalent array type there.
Likewise, foreign-key constraints can't be applied to the elements of an array (EDIT: this is still true as of 2021).
Short answer: Yes, it is bad design. Using arrays guarantees that your design is not in 1NF, because 1NF requires that there be no repeating groups of values. Proper design is unequivocal: make another table for the array's values and join it when you need them all.
Arrays may be the right tool for the job in certain limited circumstances, but I would still try hard to avoid them. They're a feature of last resort.
The biggest problem with arrays is that they're a crutch. You know them already and you want to use them because they're familiar to you. But they do not work quite like you expect, and they will only allow you to postpone a true understanding of SQL and relational databases. You're much better off waiting until you're forced to use them than learning them and looking for opportunities to rely on them.
I believe arrays are a useful and appropriate design in cases where you're working with array-like data and want to use the power of SQL for efficient queries and analysis. I've begun using PostgreSQL arrays regularly for data science purposes, as well as in PostGIS for edge cases, as examples.
In addition to the well-explained challenges mentioned above, I'm finding the biggest problem is getting third-party client apps to handle the array fields in the ways I'd expect. In Tableau and QGIS, for example, arrays are treated as strings, so array operations are unavailable.
Arrays are a first-class data type in the SQL standard and generally allow for a simpler schema and more efficient queries. Arrays, in general, are a great data type. If your implementation is self-contained and doesn't need to rely on third-party tools without an API or some other middleware that can deal with the incompatibilities, then use the array field.
If, however, you interface with third-party software that queries the DB directly and needs to build queries against the array columns, then I'd avoid arrays in favor of simpler lookup tables and other traditional relational approaches.
I'm working on a fantasy turn-based game.
I now have to create the database structure for my spells. The problem is that I don't really have a good idea of how to create it. Maybe the effects of those spells should not be stored in a database at all?
For instance, effects could be: increase attack, pull an enemy, heal, teleport, hide, place a mine, and so on. Effects are pretty different from one another, and I would like the database structure to be extensible.
Edit:
It's a turn-based game: time is measured in turns, and distance is measured in squares.
Some examples of what I mean below.
Let's say we have Incinerate:
it can target only 1 enemy (not an ally)
it can be cast at a distance of 3 squares
it deals 5 damage per turn
it lasts 3 turns
Now we can take Shock Wave:
it travels in a line for 4 squares
it starts from a square near the caster
it damages the first target it hits (ally or enemy)
it deals 5 damage to the target and knocks it back 1 square
And the last one Rain Call:
it can be cast at any distance
it's a cloud the size of a 5x5 square
it can target both allies and enemies
only fire creatures take damage
while casting, the caster is immobilized and loses 5 mana per turn
As you can see, there are a lot of possible columns: the distance it travels, the duration in turns, the casting distance, the type (damage, heal, armor, etc.), the value (e.g. +2), the target (enemy, ally, both), the size, etc.
I would not use a relational database for storing spells. Relational databases are good in cases when most of the following conditions apply:
you have very large amount of data,
the data can logically be organized as n-ary relations (tables, rows, columns),
you have many users that access the data concurrently,
you need ACID properties,
et cetera
Databases are like trucks. They are big. They are difficult to use. They are expensive (in terms of needed expertise, maintenance time, and run-time efficiency, if not monetarily). They are very good at what they are good at, but not at anything else. Don't use a truck when a bicycle would suffice.
Let's come to your problem. The number of different types of spells is surely bounded and known at compile time, so why not define an interface ISpell and let each spell type be a class that implements ISpell? (You can also define an abstract class for common code.) Then a SpellFactory can construct and provide access to all the spells when the program starts. Do you really need the spells to be accessible from outside, independently of your code? A rough sketch of this idea is below.
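A minimal sketch of that idea, in Python rather than a language with explicit interfaces; the concrete spell classes, their fields, and the target methods they call are invented for illustration:

```python
# Sketch of the ISpell / SpellFactory idea with an abstract base class.
from abc import ABC, abstractmethod

class Spell(ABC):
    name: str
    casting_range: int          # in squares

    @abstractmethod
    def apply(self, caster, target):
        """Each spell type implements its own effect logic."""

class Incinerate(Spell):
    name = "Incinerate"
    casting_range = 3

    def apply(self, caster, target):
        # Hypothetical target API: 5 damage per turn for 3 turns.
        target.add_damage_over_time(amount=5, turns=3)

class ShockWave(Spell):
    name = "Shock Wave"
    casting_range = 1

    def apply(self, caster, target):
        # Hypothetical target API: direct damage plus knock-back.
        target.take_damage(5)
        target.knock_back(squares=1)

class SpellFactory:
    """Builds every known spell once at start-up and hands out instances."""
    def __init__(self):
        self._spells = {cls.name: cls() for cls in (Incinerate, ShockWave)}

    def get(self, name):
        return self._spells[name]

factory = SpellFactory()
print(factory.get("Incinerate").casting_range)   # -> 3
```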
If hard-coding a SpellFactory is not flexible enough for your purposes, you can use XML configuration files, e.g. <spell type="blind" description="bla bla" picture="file.jpg"> <effects> <effect .. /> .. </effects> <range>5</range> </spell>. I don't know much about computer games, but this is what they did in Sid Meier's Civilization, for example. Then, instead of hard-coding the different spells in the SpellFactory, you can let it read them from the configuration file at start-up.
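And a rough sketch of reading such a file with Python's standard xml.etree.ElementTree; the element and attribute names simply extend the fragment above and are not a real schema:

```python
# Sketch: let the SpellFactory read spell definitions from XML at start-up.
import xml.etree.ElementTree as ET

SPELL_XML = """
<spells>
  <spell type="blind" description="bla bla" picture="file.jpg">
    <effects>
      <effect kind="accuracy" value="-2"/>
    </effects>
    <range>5</range>
  </spell>
</spells>
"""

root = ET.fromstring(SPELL_XML)      # or ET.parse("spells.xml").getroot()
for spell in root.findall("spell"):
    effects = [(e.get("kind"), e.get("value")) for e in spell.find("effects")]
    print(spell.get("type"), int(spell.findtext("range")), effects)
    # -> blind 5 [('accuracy', '-2')]
```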
As far as I can see, using configuration files instead of a database has the following advantages:
It is a fast, easy, lightweight solution,
It is much more flexible than forcing all the spells to share the same set of columns (most of which will not make sense for any specific spell),
It is much easier to have more than one version of the set of spells at the same time, for experiments, variations, etc.,
You can let end users access and manipulate the XML files to customize the game, without letting them access the database, which would also contain sensitive data,
et cetera.
The disadvantages:
More people know about relational databases than about the XML format, so you might need a couple of hours to learn how to read and manipulate XML elements.
Your question is pretty broad. It depends on a lot of things: are you going to load the spells at runtime, or perhaps at the beginning of the game? What database will you be using?
Amit Bhargava's suggestion is good and has the advantage of being user-understandable. However, strings are pretty slow to compare, so what you could do is use flags in your spell table. Then, based on the flags, you know which type of spell it is. A small sketch of what I mean is below.
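For example (a sketch only; the flag names are invented), the targeting rules from the spells in the question could be packed into one integer column:

```python
# Sketch: encode spell properties as bit flags stored in a single integer column.
from enum import IntFlag

class Target(IntFlag):
    ENEMY = 1
    ALLY = 2
    SELF = 4

# Incinerate targets only enemies; Rain Call affects both allies and enemies.
incinerate_flags = Target.ENEMY
rain_call_flags = Target.ENEMY | Target.ALLY

print(int(rain_call_flags))                  # -> 3, the value stored in the table
print(bool(rain_call_flags & Target.ALLY))   # -> True, a cheap bitwise test
```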
Questions
I want to classify/categorize/cluster/group together a set of several thousand websites. There's data that we can train on, so we can do supervised learning, but it's not data that we've gathered and we're not adamant about using it -- so we're also considering unsupervised learning.
What features can I use in a machine learning algorithm to deal with multilingual data? Note that some of these languages might not have been dealt with in the Natural Language Processing field.
If I were to use an unsupervised learning algorithm, should I just partition the data by language and deal with each language differently? Different languages might have different relevant categories (or not, depending on your psycholinguistic theoretical tendencies), which might affect the decision to partition.
I was thinking of using decision trees, or maybe Support Vector Machines (SVMs) to allow for more features (from my understanding of them). This post suggests random forests instead of SVMs. Any thoughts?
Pragmatic approaches are welcome! (Theoretical ones, too, but those might be saved for later fun.)
Some context
We are trying to classify a corpus of many thousands of websites in 3 to 5 languages (maybe up to 10, but we're not sure).
We have training data in the form of hundreds of websites that are already classified. However, we may choose not to use that data set -- if other categories make more sense, we're open to not using the training data we have, since it is not something we gathered in the first place. We are in the final stages of scraping data/text from the websites.
Now we must decide on the issues above. I have done some work with the Brown Corpus and the Brill tagger, but this will not work here because of the multiple-language issue.
We intend to use the Orange machine learning package.
According to the context you have provided, this is a supervised learning problem.
Therefore, you are doing classification, not clustering. If I misunderstood, please update your question to say so.
I would start with the simplest features: tokenize the Unicode text of the pages, use a dictionary to translate every new token to a number, and simply treat the presence of a token as a feature.
Next, I would use the simplest algorithm I can - I tend to go with Naive Bayes, but if you have an easy way to run SVM this is also nice.
Compare your results with some baseline - say assigning the most frequent class to all the pages.
Is the simplest approach good enough? If not, start iterating over algorithms and features.
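A minimal sketch of that pipeline with scikit-learn, assuming the page texts and their labels are already in two lists (load_pages here is a placeholder for however you load your scraped data):

```python
# Sketch: binary token-presence features, Naive Bayes, and a most-frequent baseline.
from sklearn.dummy import DummyClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

texts, labels = load_pages()   # placeholder: your scraped page texts and known classes

X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2)

# binary=True records only whether a token occurs, as described above.
model = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
model.fit(X_train, y_train)

# Baseline: always predict the most frequent class.
baseline = make_pipeline(CountVectorizer(binary=True),
                         DummyClassifier(strategy="most_frequent"))
baseline.fit(X_train, y_train)

print("Naive Bayes accuracy:", model.score(X_test, y_test))
print("Most-frequent baseline:", baseline.score(X_test, y_test))
```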
If you go the supervised route, then the fact that the web pages are in multiple languages shouldn't make a difference. If you go with, say, lexical features (bag-of-words style), then each language will end up yielding disjoint sets of features, but that's okay. All of the standard algorithms will likely give comparable results, so just pick one and go with it. I agree with Yuval that Naive Bayes is a good place to start, and only if that doesn't meet your needs should you try something like SVMs or random forests.
If you go the unsupervised route, though, the fact that the texts aren't all in the same language might be a big problem. Any reasonable clustering algorithm will first group the texts by language, and then within each language cluster by something like topic (if you're using content words as features). Whether that's a bug or a feature will depend entirely on why you want to classify these texts. If the point is to group documents by topic, irrespective of language, then it's no good. But if you're okay with having different categories for each language, then yeah, you've just got as many separate classification problems as you have languages.
If you do want a unified set of classes, then you'll need some way to link similar documents across languages. Are there any documents in more than one language? If so, you could use them as a kind of statistical Rosetta Stone, to link words in different languages. Then, using something like Latent Semantic Analysis, you could extend that to second-order relations: words in different languages that never occur in the same document, but which tend to co-occur with words that do. Or maybe you could use something like anchor text or properties of the URLs to assign a rough classification to documents in a language-independent manner and use that as a way to get started.
But, honestly, it seems strange to go into a classification problem without a clear idea of what the classes are (or at least what would count as a good classification). Coming up with the classes is the hard part, and it's the part that'll determine whether the project is a success or failure. The actual algorithmic part is fairly rote.
The main answer is: try different approaches. Without actual testing it's very hard to predict which method will give the best results. So I'll just suggest some methods that I would try first and describe their pros and cons.
First of all, I would recommend supervised learning. Even if the data classification is not very accurate, it may still give better results than unsupervised clustering. One of the reasons is the number of random factors involved in clustering. For example, the k-means algorithm relies on randomly selected points when starting the process, which can lead to very different results across runs (though the x-means modification seems to mitigate this behavior). Clustering will give good results only if the underlying elements form well-separated regions in the feature space.
One approach to handling multilingual data is to use multilingual resources as support points. For example, you can index some Wikipedia articles and create "bridges" between the same topics in different languages. Alternatively, you can create a multilingual association dictionary, as this paper describes.
As for methods, the first thing that comes to mind is instance-based semantic methods like LSI. LSI uses a vector space model to calculate the distance between words and/or documents. In contrast to other methods, it can efficiently handle synonymy and polysemy. Its disadvantages are computational inefficiency and a lack of implementations. One of the phases of LSI uses a very big co-occurrence matrix, which for a large corpus of documents will require distributed computing and other special treatment. There's a modification of LSA called Random Indexing which does not construct the full co-occurrence matrix, but you'll hardly find a suitable implementation of it. Some time ago I created a library in Clojure for this method, but it is pre-alpha now, so I can't recommend using it. Nevertheless, if you decide to give it a try, you can find the project 'Clinch' by the user 'faithlessfriend' on GitHub (I won't post a direct link to avoid unnecessary advertisement).
Beyond the special semantic methods, the rule "simplicity first" applies. From this point of view, Naive Bayes is the right place to start. The only note here is that the multinomial version of Naive Bayes is preferable: my experience says that word counts really do matter.
SVMs are a technique for classifying linearly separable data, and text data is almost never linearly separable (at least several common words appear in any pair of documents). That doesn't mean SVMs cannot be used for text classification - you should still try them - but the results may be much lower than for other machine learning tasks.
I don't have enough experience with decision trees, but using them for efficient text classification seems strange to me. I have seen some examples where they gave excellent results, but when I tried the C4.5 algorithm on this task, the results were terrible. I believe you should get some software where decision trees are implemented and test them yourself. It is always better to know than to guess.
There's much more to say on every topic, so feel free to ask more questions on a specific topic.
When using a DBM database (e.g. Berkeley or GDBM), is it better to store data using fewer long strings or more short strings? I can easily structure my data either way. I'm looking for 'better' in the performance sense, but I'm interested in other implications as well.
Berkeley DB, or any other DBM implementation, will incur overhead for each key/value pair. If you're dealing with millions of k/v pairs, the overhead will matter; otherwise it's noise, and you should choose whatever is easiest for you, the programmer, and let the database deal with the data. Overhead and access time will also depend on the access method. Hash tables and B-trees are totally different algorithmic animals. If your data has any degree of key ordering, or access patterns that depend on keys, then 99% of the time B-trees are the way to go.
I think you're asking a great design question, but for anyone to give you a perfect answer we'd all have to know a lot more about the amount of data you're dealing with, your access patterns, and many other factors.
If you will be frequently searching or modifying the data, a greater number of short strings will provide better performance.
That is, you don't want to be frequently searching for a substring of one of those long strings, or frequently modifying some value in the middle of a long string.
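As a rough illustration of the trade-off, here are the two layouts sketched with Python's standard dbm module (standing in for GDBM/Berkeley DB; the record layout and key names are invented):

```python
# Sketch: one long packed value vs. many short per-field values in a DBM store.
import dbm

# Layout A: one long string per record, fields packed together.
with dbm.open("users_long", "c") as db:
    db[b"user:42"] = b"alice|alice@example.com|2024-01-01|active"
    # Reading or changing one field means parsing and rewriting the whole value.
    name, email, created, status = db[b"user:42"].split(b"|")

# Layout B: more, shorter strings - one key per field.
with dbm.open("users_short", "c") as db:
    db[b"user:42:name"] = b"alice"
    db[b"user:42:status"] = b"active"
    # A single field can be read or updated without touching the others,
    # at the cost of per-key overhead in the store.
    db[b"user:42:status"] = b"inactive"
```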
I think this question is really hard to answer in a completely generic way. There are so many variables here, that you would really need to test some common scenarios to determine the answer that is best for you.
Some factors to consider:
Will larger strings require substring searches?
What kind of searches will you perform over the data?
In the end, it's generally better to go with the approach that yields the most normalized schema. Optimization can start from there, and depending upon your DB, there are probably better alternatives than restructuring the underlying schema purely for performance.