Tuple relational calculus - Turing-complete

Is safe tuple relational calculus a Turing-complete language?

Let's forget about safety. By Codd's theorem, relational calculus is equivalent to first-order logic (FOL). FOL is very limited: it cannot express the fact that there is a route from a point A to a point B in some graph. It can only express the existence of a route of bounded length; for example, ∃x ∃y ∃z ∃t route(a,x) and route(x,y) and route(y,z) and route(z,t) and route(t,b) means there is a route of length 5.
See descriptive complexity for an overview of the expressive strength of different logics.
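To make the contrast concrete, here is a minimal Datalog sketch of unbounded reachability (the route and reachable predicate names are just illustrative); the second reachable rule is recursive, i.e., it needs a fixpoint, which is exactly what FOL and relational calculus cannot express:

    % Facts: the edges of the graph.
    route(a, b).
    route(b, c).
    route(c, d).

    % reachable(X, Y) holds if there is a route of any length from X to Y.
    reachable(X, Y) :- route(X, Y).                   % base case: a direct edge
    reachable(X, Y) :- route(X, Z), reachable(Z, Y).  % recursive case

    % Query: is d reachable from a?
    % ?- reachable(a, d).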

According to Codd's theorem, relational algebra and relational calculus are equivalent in expressive power. It is well known that relational algebra is not Turing-complete, so neither is relational calculus.
[Edit] You cannot, for instance, perform aggregate operations (such as sum or max) or write recursive queries in relational algebra/calculus. See here (near the end).

Related

OWL Formal Semantics

From what I have understood from reading the book Foundations of Semantic Web Technologies concerning OWL formal semantics, Hitzler et al. put forward two kinds of model-theoretic semantics for SROIQ: one is a model-checking-like approach (where we check different interpretations to find the models of our KB), and the other goes via predicate logic. In the latter approach, the book simply translates SROIQ into predicate logic.
However, the book is a bit confusing for me and I do not know if I have gotten some points right, so here are my questions:
Is model-checking a kind of model-theoretic semantics?
Is translating your SROIQ into predicate logic also a model-theoretic semantics?
How is translating SROIQ into predicate logic a kind of "semantics"? Is that because after the conversion, we can pick up FOL semantics and algorithms?
Thanks!
P.S. This is a link to the book! Just in case!
Model-theoretic semantics is how you determine the meaning of axioms - i.e., what rules are available to build a model or to check that it is a valid one. Two examples: OWL semantics and RDF semantics. They have a lot of overlap but are not identical.
Model checking does not define semantics; it applies the semantic rules defined by a model theory to actual knowledge bases. Translation to another formalism, e.g., predicates, might preserve the same semantics (i.e., all models stay the same in both formalisms), but this depends on the formalisms involved.

Is Datalog equivalent to SQL?

If Datalog is based on first-order logic, which is equivalent to SQL, how come Datalog can express transitive closure (which is inexpressible in SQL/first-order logic)?
https://en.wikipedia.org/wiki/Datalog
This clearly suggests that Datalog is more expressive than SQL, yet
http://www.learndatalogtoday.org/
says that it has the expressive power of SQL. Does that mean Datomic implements only a subset of Datalog? Or is Datalog first-order logic with fixpoints? What am I missing here?
I think you are right. Datalog is first-order logic with fixpoints, while classical SQL is pure first-order logic.
Practically, this comes from Datalog allowing recursion, whereas classical SQL had no expression for recursion (modern SQL dialects do add one via recursive common table expressions).
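To see the difference concretely, here is the classic transitive-closure example as a small Datalog sketch (the parent and ancestor predicate names are made up for illustration); the second ancestor rule is the recursion that a pure first-order query cannot express:

    % Facts.
    parent(alice, bob).
    parent(bob, carol).

    % ancestor is the transitive closure of parent;
    % computing it requires a fixpoint.
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % ?- ancestor(alice, carol).   % succeeds via bob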

Difference between homonyms and synonyms in data science with examples

Please share the difference between homonyms and synonyms in data science with examples.
Synonyms for concepts:
When you determine that two concepts are synonyms (say, sofa and couch), you use owl:equivalentClass. The entailment here is that any instance that was a member of class sofa is now also a member of class couch, and vice versa. One of the nice things about this approach is that the "context" of this equivalence is automatically scoped to the ontology in which you make the equivalence statement. If you had a very small mapping ontology between a furniture ontology and an interior decorating ontology, you could say in the map that these two classes are equivalent. In another situation, if you needed to retain the (subtle) difference between a couch and a sofa, you could do that by merely not including the mapping ontology that declared them equivalent.
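That entailment can be sketched as a pair of Datalog-style rules, read bottom-up (a deliberate simplification of the OWL semantics, with a made-up instance_of predicate standing in for class membership):

    % A fact from the furniture ontology.
    instance_of(item42, sofa).

    % owl:equivalentClass between sofa and couch, read as mutual inclusion:
    % every member of sofa is a member of couch, and vice versa.
    instance_of(X, couch) :- instance_of(X, sofa).
    instance_of(X, sofa)  :- instance_of(X, couch).

    % Under a bottom-up (Datalog) reading, instance_of(item42, couch)
    % is now derivable.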
Homonyms for concepts:
As Led Zeppelin says, "and you know sometimes words have two meanings…" What happens when a "word" has two meanings is that we have what WordNet would call "word senses." In a particular language, a set of characters may represent more than one concept. One example is the English word "mole," for which WordNet has six word senses. The Semantic Web approach is to give each its own namespace; for instance, I might refer to the counterspy mole as cia:mole and the burrowing animal as mammal:mole. (These are shortened qnames for what would be full namespace names.) The nice thing about this is that, if the CIA ever needed to refer to the animal, they could unambiguously refer to mammal:mole.
1. Homonyms are words that sound the same but differ in meaning.
2. Synonyms are words that have the same or almost the same meaning.
Homonyms
Machine learning algorithms are now the subject of ethical debate, so consider "bias": in layman's terms, bias is a pre-formed view created before the facts are known; in machine learning and data mining, it refers to an estimating procedure's tendency to produce estimates or predictions that are, on average, off target.
"Confidence" also names two things. In data mining, confidence is one of the ways the strength of a rule or policy can be measured. In statistics, confidence is a metric for determining how reliable a sample-based estimate is (we are 95 percent confident that the average blood sugar in the group lies between X and Y, based on a sample of N patients).
"Decision trees" can be diagrams that show how decisions are made and what outcomes are available, or, as decision tree algorithms, methods that repeatedly split data into pieces that become more and more homogeneous in terms of the outcome measure.
To "normalise" can mean to rescale a statistic to match the scale of other variables in a model; in databases, normalisation is the act of arranging relational tables and their columns so that table relationships are consistent.
Finally, a "graph" to statisticians is a graphical representation of data, which they call plots and charts; to computer programmers, a graph is a data structure that records the ties and links among items.
Synonyms
Statisticians use the term record to describe a single data point; in machine learning, the same thing is called an instance, a sample, or an example. Likewise, what a statistician calls a variable is, in computer science and machine learning, called an attribute, input variable, or feature. The term "estimation" is also used for prediction, though its use is generally limited to numeric outcomes; in statistics, estimation more often refers to the use of a sample statistic to measure something about a population.
The spreadsheet format, in which each column is a variable and each row is a record, is perhaps the most common non-time-series data format. Modelling in machine learning and artificial intelligence often begins with very low-level prediction data; predictive modelling then involves aggregating these low-level predictors into more informative "features".

How to infer correctly?

How do AI-based agents infer decisions that are not necessarily rational but logically correct, based on previous experience?
In the field of AI, how do expert systems infer? What kind of maths and probabilities are involved here?
I plan on creating an intelligent agent, but don't know where to start. Pointers or links to any resources would be appreciated - preferably resources that describe the mathematical concepts for those who are not mathematically minded.
I don't understand your question. In AI parlance, rationality is taken to mean, "Acting in a way, given a situation and a history, that is expected to maximize some performance measure." One does not sacrifice rationality, because that would be acting in a way not expected to maximize performance.
Maybe you are thinking that rationality and predicate or first-order logic are the same thing; they're not.
In any case, your question is too broad to really answer. But, I believe you'll want to start with basic probability, then specifically Bayesian probability and statistics, and then (having the correct tools) you can look into probabilistic AI techniques: Markov chains, Markov decision processes, etc. You can also look at machine learning techniques.
Be aware: These are not simple mathematics. There is no way around that.
Note that this answer speaks to my personal biases; it is not an exhaustive list of techniques.
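For orientation, the core identity behind Bayesian probability is Bayes' rule: P(H|D) = P(D|H)·P(H) / P(D), which says how to update the probability of a hypothesis H after observing data D.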
One approach is to use propositional logic or first-order logic. The latter is more flexible.
First you define your current knowledge, and then you can perform inferences by applying rules. Prolog is a very powerful programming language for this purpose. In Prolog, you define your current knowledge using facts, and then you can create rules that denote relationships. You can then run queries against the facts and rules you defined, as in the sketch below.
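A minimal Prolog knowledge base, with all names invented for illustration:

    % Facts: the current knowledge.
    parent(tom, ann).
    parent(ann, joe).

    % Rule: a relationship derived from the facts.
    grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

    % Query, typed at the Prolog prompt:
    % ?- grandparent(tom, joe).
    % true.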

Relational Algebra instead of SQL

I am studying relational algebra these days and I was wondering...
Don't you think it would be better if there existed a compiler that compiled relational algebra rather than SQL?
In which case would a database programmer be more productive?
Is there any research on relational algebra compilers?
Thanks in advance
See Tutorial D by C. J. Date; he also has a good rant somewhere on the evils of SQL.
Also see Datalog, which, although not exactly relational algebra, is similar.
At my school, one student implemented a relational algebra parser as a Bachelor's thesis. You can test it here:
http://mufin.fi.muni.cz/projects/PA152/relalg/index.cgi
It's in Czech, but I think you can get the point.
I tried to write some relational algebra queries and it was much better than writing the equivalent queries in SQL! They were much shorter, simpler to write, more straightforward, and more understandable. I really enjoyed writing them.
So I don't understand why we are all using SQL when there is relational algebra.
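As a small illustration (using a hypothetical Employee relation), the algebra expression π_name(σ_age>30(Employee)) reads as a single pipeline, select the rows and then project the column, whereas the equivalent SQL query spreads the same two operations across SELECT, FROM, and WHERE clauses.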
There is indeed research on compiling relational algebra.
A good place to start:
Thomas Neumann: Efficiently Compiling Efficient Query Plans for Modern Hardware. PVLDB 4(9): 539-550 (2011)
