I have to implement a fuzzy matching solution for a client and am going to use Damerau-Levenshtein distance for it. So far so good, but I'm concerned about cascades/collapses/chains, or whatever you want to call the situation where A matches B, and B matches C, but A doesn't match C, and C might match something else in turn, and so on. In theory all records could collapse onto one record. But even if that doesn't happen, what is the industry standard for handling this problem? All the sources I've read seem to conveniently ignore it, yet to me that seems to be the actual hard part of fuzzy matching, not the comparatively trivial choice of edit distance.
Is the industry standard to just ignore this? Or to allow the cascade and just control the order in which it happens, or is the answer "it depends on what the client wants"?
This isn't even exclusive to fuzzy matching; any time you have an "OR" condition in your match criteria you run into the same problem, but I never see it addressed anywhere.
I played around with different solutions I thought of myself, but I'm not sure there's a definitive correct answer.
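To make the cascade concrete, here is a toy sketch of what I mean (illustrative names, plain Levenshtein standing in for Damerau-Levenshtein): with a threshold of 1, taking the transitive closure of the pairwise matches via union-find pulls "smith" and "smythe" into the same group even though they are 2 edits apart, purely because "smyth" bridges them.

    from itertools import combinations

    def edit_distance(a, b):
        # Plain Levenshtein as a stand-in for Damerau-Levenshtein.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def transitive_groups(records, threshold=1):
        # Union-find: A~B and B~C puts A, B, C in one group even if A and C
        # are farther apart than the threshold -- the cascade/collapse.
        parent = {r: r for r in records}
        def find(r):
            while parent[r] != r:
                parent[r] = parent[parent[r]]
                r = parent[r]
            return r
        for a, b in combinations(records, 2):
            if edit_distance(a, b) <= threshold:
                parent[find(a)] = find(b)
        groups = {}
        for r in records:
            groups.setdefault(find(r), []).append(r)
        return list(groups.values())

    print(transitive_groups(["smith", "smyth", "smythe", "blake"]))
    # [['smith', 'smyth', 'smythe'], ['blake']]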
I'm trying to write a program that can solve a maze in PDDL, for example by using Graphplan. From the examples I have seen on the internet, one gets a solution to the problem (e.g. PDDL Graphplan can't find plan), but only one. My project has some specific restrictions that require me to get ALL possible solutions to the maze and then evaluate these solutions separately. Is this possible?
PDDL is a specification for describing problems; it has nothing to do with the output. The implementation of the search system is in charge of returning the results. Most of the competitions that use PDDL only require a single plan as the result, so many of the planning systems out there (at least the ones I've seen from the competitions) return only a single result. If you're rolling your own, then you can just return all of them, or if the one you choose is open source, it's probably not that difficult to update it to return multiple optimal plans when found.
At first this problem seems trivial: given two ontologies, which term in ontology A best refers to a term in ontology B.
But its simplicity is deceptive: this problem is extremely hard and has so far led to thousands of academic publications, without any consensus on how to solve it.
Naively, one would expect that simply looking at the term "Heart Attack" in both ontologies would suffice.
However, ontologies almost never encode the same phrase.
In simple cases "Heart Attack" might be coded as "Heart Attacks", or "Heart attack (non-fatal)", but in more complicated cases it might only be coded as "Myocardial infarction".
In other cases it is even more complicated, for example dealing with compound (composed) terms.
More importantly, simply matching the term (or string) ignores the "ontological structure".
What if "Heart Attack" in ontology A is coded as caused-by high blood pressure, whereas in ontology B it might be coded as withdrawl-from-trial-non-fatal.
In this case it might be valid to match the two terms, but not trivially so.
And this assumes the equivalent term exists at all.
It's a classical problem called Semantic/Ontology Matching, Alignment, or Harmonization. The research out there involves lexical similarity, term usage in free text, graph homomorphisms, curated mappings (like MeSH/WordNet), topic modeling, and logical inference (first- or higher-order logic). But which is the most user-friendly and production-ready solution that can be integrated into a Java(/Clojure) or Python app? I've looked at Ontology matching: A literature review, but they don't seem to recommend anything ... any suggestions or experiences?
Have a look at http://oaei.ontologymatching.org/2014/results/. There were several tracks open for matchers to be submitted and evaluated. Not every matcher participates in every track, so you might want to read the track descriptions and pick the one that seems most similar to your problem. For example, if you don't have to deal with multiple languages, you probably don't have to check the MultiFarm track. After that, check the results by looking at Recall, Precision and F-Measure and decide for yourself. You might also want to check out some earlier years.
I work at a public health department that takes in and stores lots of medical data every day. I've written a program that uses regular expressions to determine if particular fields in the incoming data are valid or invalid. Ex: DOBs come in as YYYYmmDD, so they should match regex ^[0-9]{8}$
I want to analyze the "invalid" data to help identify problems in our system (we get way too much data to go through each 'bad' record row-by-row). Can anyone suggest AI techniques/machine learning techniques that can 'monitor' the bad data and find patterns in what is wrong? I think that coming up with a bunch of regular expressions for possible ways the data could be invalid (ex. not enough or too many characters) and then keeping track of those results might work. But instead of me thinking up all of the ways the data could be invalid, I'm curious about ways to 'learn' the patterns from the bad data using AI.
Are there any known techniques that do this?
I think that coming up with a bunch of regular expressions for possible ways the data could be invalid (ex. not enough or too many characters) and then keeping track of those results might work. But instead of me thinking up all of the ways the data could be invalid, I'm curious about ways to 'learn' the patterns from the bad data using AI.
What's funny is I'm reminded of a quotation usually attributed to Jamie Zawinski:
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.
Except, in this case, I think the hand-crafted regex route is actually your best bet!
Irony of ironies.
Anyway.
The point of this saying is that people tend to overcomplicate their solutions. Here, regexes are actually a fairly simple solution to your problem, whereas creating a learner is something that will take you a lot more time than I think you realize.
There are fewer ways for this very constrained data representation (a date) to be expressed correctly than there are ways for it to be expressed incorrectly, because there are essentially infinite ways to define bad data. Do you want to train a learner to detect all of them? It's a rabbit hole. Think of this AI learner instead as a coworker or a friend: how would you describe to them all the ways that dates can't be represented properly?
While your intention was to make less work for yourself in the long run -- and that's a good quality to have -- figuring out how to develop a learner, not to mention train and validate it, not to mention watch it carefully, outweighs any benefits that learner can provide you in such a narrow use case.
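To make the hand-crafted route concrete, here is a minimal sketch of the "bunch of regular expressions plus tallying" idea from your question. The failure-mode patterns and labels below are made up for illustration; you'd swap in whatever actually shows up in your feed.

    import re
    from collections import Counter

    # Illustrative failure-mode patterns for a YYYYmmDD date-of-birth field.
    DIAGNOSTICS = [
        ("empty",           re.compile(r"^$")),
        ("too short",       re.compile(r"^[0-9]{1,7}$")),
        ("too long",        re.compile(r"^[0-9]{9,}$")),
        ("has separators",  re.compile(r"^[0-9]{1,4}[-/][0-9]{1,2}[-/][0-9]{1,4}$")),
        ("non-digit chars", re.compile(r"[^0-9]")),
    ]

    def categorize(value):
        # Return the label of the first diagnostic pattern that fires.
        for label, pattern in DIAGNOSTICS:
            if pattern.search(value):
                return label
        return "other"

    bad_dobs = ["1985-03-12", "19853", "", "03/12/1985", "UNKNOWN", "198503121"]
    print(Counter(categorize(v) for v in bad_dobs))
    # e.g. Counter({'has separators': 2, 'too short': 1, ...})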
Bayesian filtering might be what you are looking for.
It sounds like you want to apply supervised learning to regular expressions. These fellows seem to be up to something of that sort.
Perhaps look for techniques of "outlier detection"?
Questions
I want to classify/categorize/cluster/group together a set of several thousand websites. There's data that we can train on, so we can do supervised learning, but it's not data that we've gathered and we're not adamant about using it -- so we're also considering unsupervised learning.
What features can I use in a machine learning algorithm to deal with multilingual data? Note that some of these languages might not have been dealt with in the Natural Language Processing field.
If I were to use an unsupervised learning algorithm, should I just partition the data by language and deal with each language differently? Different languages might have different relevant categories (or not, depending on your psycholinguistic theoretical tendencies), which might affect the decision to partition.
I was thinking of using decision trees, or maybe Support Vector Machines (SVMs) to allow for more features (from my understanding of them). This post suggests random forests instead of SVMs. Any thoughts?
Pragmatic approaches are welcome! (Theoretical ones, too, but those might be saved for later fun.)
Some context
We are trying to classify a corpus of many thousands of websites in 3 to 5 languages (maybe up to 10, but we're not sure).
We have training data in the form of hundreds of websites that have already been classified. However, we may choose to use that data set or not -- if other categories make more sense, we're open to not using the training data we have, since it is not something we gathered in the first place. We are in the final stages of scraping data/text from the websites.
Now we must decide on the issues above. I have done some work with the Brown Corpus and the Brill tagger, but this will not work because of the multiple-languages issue.
We intend to use the Orange machine learning package.
According to the context you have provided, this is a supervised learning problem.
Therefore, you are doing classification, not clustering. If I misunderstood, please update your question to say so.
I would start with the simplest features: tokenize the Unicode text of the pages, use a dictionary to translate every new token to a number, and simply treat the presence of a token as a feature.
Next, I would use the simplest algorithm I can - I tend to go with Naive Bayes, but if you have an easy way to run an SVM, that is also nice.
Compare your results with some baseline - say assigning the most frequent class to all the pages.
Is the simplest approach good enough? If not, start iterating over algorithms and features.
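A rough sketch of that pipeline, assuming scikit-learn rather than Orange and toy stand-in data for the pages and labels (token presence as features, Naive Bayes on top, most-frequent-class baseline for comparison):

    from sklearn.dummy import DummyClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy stand-ins for the scraped page texts and their (assumed) labels.
    pages = ["buy cheap shoes online", "latest football scores and results",
             "shoes and boots on sale", "match report and league table",
             "discount sneakers store", "cup final highlights"]
    labels = ["shop", "sport", "shop", "sport", "shop", "sport"]

    X_train, X_test, y_train, y_test = train_test_split(
        pages, labels, test_size=0.33, random_state=0)

    # Token presence/absence as features (binary=True), Naive Bayes on top.
    model = make_pipeline(CountVectorizer(binary=True), MultinomialNB())
    model.fit(X_train, y_train)

    # Baseline: always predict the most frequent class seen in training.
    baseline = make_pipeline(CountVectorizer(binary=True),
                             DummyClassifier(strategy="most_frequent"))
    baseline.fit(X_train, y_train)

    print("model:   ", model.score(X_test, y_test))
    print("baseline:", baseline.score(X_test, y_test))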
If you go the supervised route, then the fact that the web pages are in multiple languages shouldn't make a difference. If you go with, say, lexical features (bag-o'-words style), then each language will end up yielding disjoint sets of features, but that's okay. All of the standard algorithms will likely give comparable results, so just pick one and go with it. I agree with Yuval that Naive Bayes is a good place to start, and only if that doesn't meet your needs should you try something like SVMs or random forests.
If you go the unsupervised route, though, the fact that the texts aren't all in the same language might be a big problem. Any reasonable clustering algorithm will first group the texts by language, and then within each language cluster by something like topic (if you're using content words as features). Whether that's a bug or a feature will depend entirely on why you want to classify these texts. If the point is to group documents by topic, irrespective of language, then it's no good. But if you're okay with having different categories for each language, then yeah, you've just got as many separate classification problems as you have languages.
If you do want a unified set of classes, then you'll need some way to link similar documents across languages. Are there any documents in more than one language? If so, you could use them as a kind of statistical Rosetta Stone, to link words in different languages. Then, using something like Latent Semantic Analysis, you could extend that to second-order relations: words in different languages that don't ever occur in the same document, but which tend to co-occur with words which do. Or maybe you could use something like anchor text or properties of the URLs to assign a rough classification to documents in a language-independent manner and use that as a way to get started.
But, honestly, it seems strange to go into a classification problem without a clear idea of what the classes are (or at least what would count as a good classification). Coming up with the classes is the hard part, and it's the part that'll determine whether the project is a success or failure. The actual algorithmic part is fairly rote.
The main answer is: try different approaches. Without actual testing it's very hard to predict which method will give the best results. So I'll just suggest some methods that I would try first and describe their pros and cons.
First of all, I would recommend supervised learning. Even if the existing classification of the data is not very accurate, it may still give better results than unsupervised clustering. One reason is the number of random factors involved in clustering. For example, the k-means algorithm relies on randomly selected starting points, which can lead to very different results on different runs (though the x-means modification seems to mitigate this behavior). Clustering will give good results only if the underlying elements form well-separated regions in the feature space.
One approach to handling multilingual data is to use multilingual resources as anchor points. For example, you can index some Wikipedia articles and create "bridges" between the same topics in different languages. Alternatively, you can build a multilingual association dictionary as this paper describes.
As for methods, the first thing that comes to mind is instance-based semantic methods like LSI. It uses the vector space model to calculate distances between words and/or documents. In contrast to other methods, it can efficiently handle synonymy and polysemy. The disadvantages of this method are computational inefficiency and a lack of implementations. One of the phases of LSI uses a very large co-occurrence matrix, which for a large corpus of documents will require distributed computing and other special treatment. There's a modification of LSA called Random Indexing which does not construct the full co-occurrence matrix, but you'll be hard pressed to find a good implementation of it. Some time ago I created a Clojure library for this method, but it is still pre-alpha, so I can't recommend using it. Nevertheless, if you decide to give it a try, you can find the project 'Clinch' by the user 'faithlessfriend' on GitHub (I won't post a direct link to avoid unnecessary advertising).
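If you just want to experiment with the LSA/LSI idea without building the co-occurrence machinery yourself, a common shortcut is truncated SVD over a TF-IDF matrix. The sketch below assumes scikit-learn and toy documents (this is not the Random Indexing variant I mentioned):

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.pipeline import make_pipeline

    # Toy documents standing in for the real pages.
    docs = ["heart attack and high blood pressure",
            "high blood pressure raises heart attack risk",
            "football league results and fixtures",
            "premier league football fixtures"]

    # TF-IDF followed by truncated SVD is the classic LSA/LSI recipe.
    lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
    vectors = lsa.fit_transform(docs)

    # Same-topic documents end up noticeably more similar to each other
    # in the reduced space than cross-topic pairs do.
    print(cosine_similarity(vectors))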
Beyond specialized semantic methods, the rule "simplicity first" applies. From this standpoint, Naive Bayes is the right place to start. The only note here is that the multinomial version of Naive Bayes is preferable: my experience is that word counts really do matter.
SVM is a technique for classifying linearly separable data, and text data is almost never linearly separable (at least several common words appear in any pair of documents). That doesn't mean SVM cannot be used for text classification - you should still try it, but the results may be much worse than on other machine learning tasks.
I don't have enough experience with decision trees, but using them for efficient text classification seems strange to me. I have seen examples where they gave excellent results, but when I tried the C4.5 algorithm on this task, the results were terrible. I believe you should get some software where decision trees are implemented and test them yourself. It is always better to know than to guess.
There's much more to say on each of these topics, so feel free to ask more questions about any specific one.
I understand non-deterministic Turing machines aren't real and that they seem to branch the computation whenever there are two options, instead of picking one. But, for example, if I say this:
"Non deterministically guess a bijection p of vertices from Graph G to Graph H" (context here is Graph Isomorphism)
What is that supposed to mean? I understand the bijection, but it says "non deterministically guess". If it's guessing, how is that an algorithmic approach? How can it guarantee it's going to work?
They don't; they just sort of illustrate a point. Basically what they do is guess an answer and then check whether it's right (deterministically). It's not the guessing-the-answer part that's important, though; it's checking that the answer is right. It's like asking: given an arbitrary solution, is it correct? For example, there are problems that take exponential time to compute, and some of their answers can be checked in polynomial time, while others can't. So what the non-deterministic TM does is divide those two: the ones that can be checked quickly from the ones that can't. And this brings up the bigger question: if one group of problems' solutions can be verified much more quickly than another's, can their solutions also be generated more quickly? That question hasn't been answered yet.
There are different ways to picture one. One I find useful is the oracle model. Did you ever see the Far Side cartoon where a derivation on the blackboard has "Here a miracle occurs" as one of the intermediate steps? In this version of an NDTM, when you need to choose something, the oracle writes the correct choice on the right part of the tape. (This is taken from Garey and Johnson, Computers and Intractability, their classic book on NP-complete problems.) You aren't allowed to assume you've got the right one, though, and there may not be a correct one.
Therefore, when you non-deterministically guess a bijection, you're getting the correct bijection for your purposes, provided one exists.
It isn't a good basis for an algorithm, since the cost of deterministically simulating a non-deterministic Turing machine is basically exponential in the number of nondeterministic choices, and the algorithmic equivalent of the nondeterministic guess is to try every possible bijection.
From a theoretical point of view, I'd translate it as "If there is a bijection such that....". From an algorithmic point of view, find another book, or another chapter of the same book, since that approach is useless for even moderately large graphs.
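To make "try every possible bijection" concrete, here is a small sketch (toy graphs, illustrative function names): the check inside the loop is the polynomial-time verification an NDTM would do after its guess, and the loop over all permutations is the exponential price a deterministic machine pays to simulate that guess.

    from itertools import permutations

    def is_isomorphism(g_edges, h_edges, mapping):
        # Polynomial-time check: does the bijection preserve adjacency?
        mapped = {frozenset((mapping[u], mapping[v])) for u, v in g_edges}
        return mapped == {frozenset(e) for e in h_edges}

    def find_isomorphism(g_vertices, g_edges, h_vertices, h_edges):
        # Deterministic simulation of the nondeterministic guess:
        # try every bijection from G's vertices onto H's vertices.
        for perm in permutations(h_vertices):
            mapping = dict(zip(g_vertices, perm))
            if is_isomorphism(g_edges, h_edges, mapping):
                return mapping          # the "lucky guess" exists
        return None                     # no bijection works

    # Toy example: path a-b-c versus path 1-2-3.
    print(find_isomorphism(["a", "b", "c"], [("a", "b"), ("b", "c")],
                           [1, 2, 3], [(1, 2), (2, 3)]))
    # {'a': 1, 'b': 2, 'c': 3}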
I believe what is meant is "non-deterministically choose a solution" and then test whether that solution is valid. Since all possible choices (guesses) are tested, a solution is guaranteed to be found if one exists.
A physical implementation of the non-deterministic Turing machine is the DNA computer. For example, here's an outline of how to solve the traveling salesman problem in DNA:
Get/make a bunch of DNA sequences, each with length proportional to the cost of an edge in your graph and sticky ends with sequences uniquely identifying one of the vertices that the edge connects.
Mix them together, with DNA ligase in a big beaker. They'll anneal to each other in sequences that represent every possible path through the graph (ok, not the really long ones).
Remove all the sequences that are missing at least one vertex. To do this, sequentially select for each vertex using hybridization. For example, if "ACGTACA" encodes vertex 1, select for sequences that bind to "TGTACGT" (its reverse complement). Then repeat this selection for every other vertex.
Sort the remaining sequences by size using gel electrophoresis. Then sequence the shortest one. The sequence encodes the shortest path through your graph.