Tommy, Jill and Travelor belong to the SC club. Every member of the SC club is either a surfer or a bike rider or both. No bike rider likes a rainy day, and all surfers like a sunny day. Jill dislikes whatever Tommy likes and likes whatever Tommy dislikes. Tommy likes a rainy day and a sunny day.
I want to represent the above information in first-order predicate logic in such a way that I can express the question "Who is a member of the SC club who is a bike rider but not a surfer?" as a predicate logic expression.
Which first-order inference technique should I pick: forward chaining, backward chaining, or resolution refutation?
First off, this question sounds like it is being asked directly out of a book. If that is the case, it might help if you referenced the book in your question. If you are truly stuck after trying to work it out, then ask yourself this...
How does each inference technique work, and what purpose does it serve toward finding solutions in first-order logic problems? Once you know that, either...
you won't understand it, but you will have a better question to ask about a particular technique,
the obvious answer will jump out at you, or
you will realize which of those techniques can work for your problem and just choose one.
Showing that you have taken some time to try to figure out the problem before posting a book-style question on Stack Overflow will make other people more likely to help you. You will also end up with questions that pinpoint your gap in conceptual understanding, which is a very good reason to post a question here, as opposed to "answer my homework"-sounding questions such as this one.
I have a problem that I think could be solved relatively quickly with a loop. I have to work with SPSS, and I think it can only be solved in syntax.
Unfortunately I am not good with loops, so I hope that one of you can help me.
I have done a study on reasons for abortions. Now I would like to present the distribution of reasons.
The problem is that each person was first asked about all their pregnancies (because this is also relevant for the later analysis); then one pregnancy was selected, to which the rest of the questionnaire refers.
So the rest of the questionnaire was only about one of the pregnancies, whereas the first questions (e.g. year of pregnancy, reason for abortion) were answered for each pregnancy. For the reasons, I only need the information that refers to the pregnancy that was also used for the rest of the questionnaire.
I have an index variable ("Index") that determines at which pass of the loop the relevant pregnancy was asked about. Then I have the variables "Loop_1_R" to "Loop_5_R", which record the reason for each of up to 5 abortions (of course only for the number of pregnancies each woman actually indicated). In between there are some missing data: for example, a woman may have said that she had 5 pregnancies, but only two of them were abortions (say the third and the fifth). Then she would only give reasons for an abortion in Loop_3_R and Loop_5_R.
Now I want to create a new variable which contains only the reason that refers to the relevant pregnancy, so only one value per woman. I was thinking you could build a loop in the sense of "compute the new variable such that loop i is taken at index i".
I could of course do it by hand, but with over 3000 participants (VPN) it would obviously take considerably longer.
I hope someone can help me! Here is an example dataset with fewer loops and participants:
You can use do repeat to loop over the variables and catch the value you need this way:
do repeat vr = Loop_1_R to Loop_5_R / vl = 1 to 5.
if (Index = vl) reason = vr.
end repeat.
execute.
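If the data ever needs to leave SPSS, the same pick-by-index logic is easy to express outside syntax as well. Here is a minimal Python sketch (the column names mirror the SPSS variables; the sample rows are made up for illustration):

```python
# Sample rows shaped like the SPSS file: one dict per woman, with the
# Index variable and the up-to-five reason loops (None = missing).
rows = [
    {"Index": 3, "Loop_1_R": None, "Loop_2_R": None, "Loop_3_R": "reason A",
     "Loop_4_R": None, "Loop_5_R": "reason B"},
    {"Index": 1, "Loop_1_R": "reason C", "Loop_2_R": None, "Loop_3_R": None,
     "Loop_4_R": None, "Loop_5_R": None},
]

def relevant_reason(row):
    """Pick the reason from the loop whose number equals the Index variable."""
    return row[f"Loop_{row['Index']}_R"]

print([relevant_reason(r) for r in rows])  # -> ['reason A', 'reason C']
```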
I have recently been coding on codingame.com. There I came across a few problems in which it seems we need to use a genetic algorithm to find the best path for my bot.
First I started with basic if-else algorithms, which were just fine to get me to the bronze league of the contest. But then this approach got me no further. I searched the net on how to go ahead, and most of the winners of the bot programming contests suggested that they used a genetic algorithm for the purpose.
I searched the net about GAs and learned that we start with a given population and then do crossover and mutation to find the fittest genes in the population.
But my question is how to apply that logic to bot design, where we have to decide the thrust given to the bot and the degree of turning for the bot.
Here is the link to the question - https://www.codingame.com/ide/puzzle/coders-strike-back
I am not asking just for the gene description for this problem; that is already available at https://www.codingame.com/blog/one-hour-learn-bot-programming/
I know the genes or genome which I may use. What I want to know is how I can use them to predict my path. I would be glad if someone could share pseudocode of how the algorithm works for this problem.
There is a "minus 3 times velocity" formula that will get you toward your goal: aim at a point offset from the next checkpoint by three times your current velocity vector, to compensate for drift (note the offset is applied per axis):
print(next_checkpoint_x - 3 * velocity_x, next_checkpoint_y - 3 * velocity_y, 'BOOST')
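Since the question asks for pseudocode, here is a hedged Python sketch of the GA loop. All constants, the toy physics in simulate(), and the fitness function are illustrative assumptions, not the real game engine: a gene is one (thrust, turn) pair, a genome is a short sequence of moves, and fitness is the simulated closeness to the next checkpoint.

```python
import math
import random

random.seed(0)

GENOME_LEN = 6        # how many (thrust, turn) moves we plan ahead
POP_SIZE = 30
GENERATIONS = 40
CHECKPOINT = (1000.0, 500.0)   # made-up target position

def random_gene():
    # one move: thrust 0..100, turn -18..18 degrees
    return (random.uniform(0, 100), random.uniform(-18, 18))

def random_genome():
    return [random_gene() for _ in range(GENOME_LEN)]

def simulate(genome):
    """Play the genome out in toy physics; higher score = closer to checkpoint."""
    x = y = vx = vy = heading = 0.0
    for thrust, turn in genome:
        heading += math.radians(turn)
        vx += thrust * math.cos(heading)
        vy += thrust * math.sin(heading)
        x, y = x + vx, y + vy
        vx, vy = vx * 0.85, vy * 0.85      # friction
    return -math.hypot(CHECKPOINT[0] - x, CHECKPOINT[1] - y)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [random_gene() if random.random() < rate else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
init_best = max(simulate(g) for g in population)
for _ in range(GENERATIONS):
    population.sort(key=simulate, reverse=True)
    parents = population[:POP_SIZE // 2]   # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=simulate)
print("distance to checkpoint after GA:", round(-simulate(best), 1))
```

In the game loop you would then output only the first (thrust, turn) of the best genome, re-plan every turn, and reuse the previous best genome as a seed for the next generation.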
I am automating a process which asks questions (via SMS, but that shouldn't matter) to real people. The questions have yes/no answers, but the person might respond in a number of ways, such as: sure, not at this time, yeah, never, or any other way they like. I would like to attempt to parse this text and determine whether it was a yes or no answer (of course it might not always be right).
I figured the ideas and concepts to do this might already exist, as it seems like a common task for an AI, but I don't know what it might be called, so I can't find information on how I might implement it. So my question is: have algorithms been developed to do this kind of parsing, and if so, where can I find more information on how to implement them?
This can be viewed as a binary (yes or no) classification task. You could write either a rule-based model or a statistics-based model to do the classification.
A rule-based model would be like if answer in ["never", "not at this time", "nope"] then answer is "no". When spam filters first came out they contained a lot of rules like these.
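For illustration, such a rule set can be sketched in a few lines of Python (the word lists below are made-up examples, not an exhaustive rule base):

```python
# A minimal rule-based yes/no classifier; the word lists are illustrative.
NO_ANSWERS = {"never", "not at this time", "nope", "no way", "no"}
YES_ANSWERS = {"sure", "yeah", "yep", "yes", "yes sir"}

def classify(answer):
    answer = answer.lower().strip()
    if answer in NO_ANSWERS:
        return "no"
    if answer in YES_ANSWERS:
        return "yes"
    return "unknown"

print(classify("Nope"))    # -> no
print(classify("yeah"))    # -> yes
print(classify("maybe"))   # -> unknown
```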
A statistics-based model would probably be more suitable here, as writing your own rules gets tiresome and does not handle new cases as well.
For this you need to label a training dataset. After a little preprocessing (like lowercasing all the words, removing punctuation and maybe even a little stemming) you could get a dataset like
0 | never in a million years
0 | never
1 | yes sir
1 | yep
1 | yes yes yeah
0 | no way
Now you can run classification algorithms like Naive Bayes or Logistic Regression over this set and learn which words more often belong to which class. First you vectorize the words: either as binary features (is the word present or not), as word counts (term frequency), or as tf-idf floats (which dampen the bias toward longer answers and common words).
In the above example yes would be strongly correlated with a positive answer (1) and never would be strongly correlated with a negative answer (0). Treating each single word as a feature like this is called the bag-of-words approach. You could also work with n-grams, so that a bigram like "not no" would be treated as a single token counting in favor of the positive class.
To combat spelling errors you can add a spellchecker like Aspell to the pre-processing step. You could also use a character vectorizer, so a word like nno would be interpreted as nn and no, letting you catch errors like hellyes; and you can rely on your users to repeat common spelling errors: if 5 users make the spelling error neve for the word never, then the token neve will automatically start to count toward the negative class (if labeled as such).
You could write these algorithms yourself (Naive Bayes is doable, Paul Graham has written a few accessible essays on how to classify spam with Bayes' theorem, and nearly every ML library has a tutorial on this) or make use of libraries or programs like Scikit-Learn (MultinomialNB, SGDClassifier, LinearSVC, etc.) or Vowpal Wabbit (logistic regression, quantile loss, etc.).
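As a sketch of the do-it-yourself route, here is a deliberately tiny multinomial Naive Bayes with add-one smoothing, trained on the toy dataset above (pure standard-library Python, not production code):

```python
import math
from collections import Counter

# Toy labeled dataset from above: 1 = yes, 0 = no.
train = [
    (0, "never in a million years"), (0, "never"), (1, "yes sir"),
    (1, "yep"), (1, "yes yes yeah"), (0, "no way"),
]

# Count words per class and class frequencies.
word_counts = {0: Counter(), 1: Counter()}
class_counts = Counter()
for label, text in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(word_counts[0]) | set(word_counts[1])

def predict(text):
    """Return the class with the highest log-probability (add-one smoothing)."""
    scores = {}
    for label in (0, 1):
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("yeah sure"))   # -> 1
print(predict("never ever"))  # -> 0
```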
Thinking off the top of my head: if you get a response that you can't tell is yes or no, you can keep it in a DB table like unknown_answers, with two more tables affirmative_answers / negative_answers. Then, in a little backend system, every time you get a new unknown answer you classify it as yes or no by hand; the system "learns" from that, and with time you will have a very big and good database of affirmative / negative answers.
In terms of Artificial Intelligence and logical knowledge, what is the difference between sound and unsound reasoning?
Also, what kind of search does the ID3 algorithm use? Is it breadth-first search?
Thanks
Reasoning is sound if the premises are true and the conclusion can be drawn from just those premises. For example:
An answer upvote gets you 10 rep
Jack has 4 answer upvotes
Jack has 40 rep
is sound (ignoring other rep factors :) ). If it read:
An answer upvote gets you 50 rep
Jack has 4 answer upvotes
Jack has 200 rep
the reasoning would be valid, but not sound, because one of the premises is false.
Two questions, not closely related. I answer only the first - do start a new SO question for the second.
There are two meanings of sound in logic. The first, which is prevalent in philosophy, is the one Michael gave. The second —which is generally used in formal logic, by logicians influenced by the terminology of model theory— is that sound inferences are truth-preserving, i.e., whenever the premises are true, so is the conclusion, or in other words, the premises imply the conclusion.
Note that the first is more demanding than the second: on the first account the premises of sound arguments need to be true, whilst on the second they do not. So all reasoning that is account-#1 sound is also account-#2 sound, but not vice versa, and Michael's post explains why: the first of his examples is sound according to both criteria, whilst the second is sound only according to the second.
I think that in AI the second definition is more prevalent, but seeing as how AI is such a diverse discipline, with heavy influences from philosophy, you might well encounter the first. When I taught AI, I used the second.
I don't know where the first definition came from, but the second is from Tarski. People who use the first definition of soundness use the term valid to talk about truth-preserving arguments. See the Internet Encyclopedia of Philosophy on Validity and Soundness for a discussion of the first definition, and Wikipedia's article on Soundness for an explanation of the second.
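For reference, the Tarskian (second) notion can be stated compactly, writing $\vdash$ for derivability in a proof system and $\models$ for semantic entailment:

```latex
\text{Soundness:}\quad \Gamma \vdash \varphi \;\Rightarrow\; \Gamma \models \varphi
\qquad
\text{Completeness:}\quad \Gamma \models \varphi \;\Rightarrow\; \Gamma \vdash \varphi
```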
A logic consists of a set of propositions and inference rules on these.
Given a logic L, every proposition p that can be derived by the successive application of inference rules is said to be sound.
Any proposition p that cannot be derived could be called unsound, but no one says that. We just say that it is not in L.
A logic L is complete if every statement p that you (as an intelligent human) think should be true is sound.
Thus, we seek logics that are both sound and complete.
This question sounds like a homework question for AI 101.
I'm trying to implement an application that can determine the meaning of a sentence by dividing it into smaller pieces. I need to know which words are the subject, object, etc., so that my program can know how to handle the sentence.
This is an open research problem. You can get an overview on Wikipedia, http://en.wikipedia.org/wiki/Natural_language_processing. Consider phrases like "Time flies like an arrow, fruit flies like a banana" - unambiguously classifying words is not easy.
You should look at the Natural Language Toolkit, which is for exactly this sort of thing.
See this section of the manual: Categorizing and Tagging Words - here's an extract:
>>> import nltk
>>> text = nltk.word_tokenize("And now for something completely different")
>>> nltk.pos_tag(text)
[('And', 'CC'), ('now', 'RB'), ('for', 'IN'), ('something', 'NN'),
('completely', 'RB'), ('different', 'JJ')]
"Here we see that and is CC, a coordinating conjunction; now and completely are RB, or adverbs; for is IN, a preposition; something is NN, a noun; and different is JJ, an adjective."
I guess there is no "simple" way to do this. You have to build a linguistic analyzer (which is quite possible); however, a language has a lot of exceptional cases, and that is what makes implementing a linguistic analyzer so hard.
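As a tiny illustration of why the exceptional cases are the hard part, here is a deliberately naive suffix-based tagger sketch (the suffix rules are illustrative assumptions, not a real tagset); the last example already breaks it:

```python
# Naive suffix-based part-of-speech guesser. The rules are made up,
# and words like "flies" (noun OR verb) show how quickly they fail.
SUFFIX_RULES = [
    ("ly", "adverb"),
    ("ing", "verb"),
    ("ed", "verb"),
    ("s", "noun"),
]

def guess_pos(word):
    for suffix, tag in SUFFIX_RULES:
        if word.lower().endswith(suffix):
            return tag
    return "unknown"

print(guess_pos("completely"))  # -> adverb
print(guess_pos("walking"))     # -> verb
print(guess_pos("flies"))       # -> noun ... but "Time flies" makes it a verb
```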
The specific problem you mention, the identification of the subject and objects of a clause, is accomplished by syntactic parsing. You can get a good idea of how parsing works by using this demo of parsing software developed by Stanford University.
However, syntactic parsing does not determine the meaning of a sentence, only its structure. Determining meaning (semantics) is a very hard problem in general, and there is no technology that can really 'understand' a sentence in the same way a human would. Although there is no general solution, you may be able to do something in a very restricted subject domain. For example, is the data you want to analyse about a narrow topic with a limited set of 'things' that people talk about?
StompChicken has given the right answer to this question, but I'd like to add that the concepts of subject and object are known as grammatical relations, and that Briscoe and Carroll's RASP is a parser that can go the extra step of deducing a list of relations from the parse.
Here's some example output from their demo page. It's an extract from the output for a sentence that begins "We describe a robust accurate domain-independent approach...":
(|ncsubj| |describe:2_VV0| |We:1_PPIS2| _)
(|dobj| |describe:2_VV0| |approach:7_NN1|)