How can math programs accept equations of any form?

For example, I can type 6+6 or 2+237 into Google or WolframAlpha. That case could be programmed by asking a user for a and b, then evaluating a+b. However, I might also type 5*5^(e) or any other combination, while the program is hard-coded to evaluate only a+b expressions.
It's easy to represent the more complex problems in code, in any common language.
import math
print(5 * pow(5, math.e))  # pseudocode made concrete: 5*5^e
But if I can't expect a user's input to be of a given form, then it isn't as simple as
x = float(input("enter coefficient"))
b = float(input("enter base"))
p = float(input("enter power"))
print(x * pow(b, p))
With this code, my program is locked in, able to evaluate only problems of the form x*b^p.
How do people write the code to dynamically handle math expressions of any form?

This might not be a question that's appropriate for this venue, but I think it's reasonable to ask. At the risk of having my answer voted out of existence along with the question, I'll offer a brief one.
Legitimate mathematical expressions, from simple to complicated, obey grammatical rules. Although a legal mathematical expression might seem unintelligible, grammatically speaking it will be far less complicated than the grammar needed to understand even small bodies of human utterance.
Still, there are levels of 'understanding' built into the products available on the net. Google and WolframAlpha are definitely high-end: they attempt to get as close as possible to defining grammars capable of representing human utterance, in effect at least. Nearer the lower end are products such as SymPy, which accept much more strictly defined input.
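To see what the stricter end looks like, here is a minimal sketch using SymPy (assuming a standard SymPy installation; sympify parses a string into a symbolic expression, and the input is the one from the question):

from sympy import sympify

expr = sympify("5*5**E")   # E is SymPy's name for Euler's number
print(expr)                # held symbolically as 5*5**E
print(expr.evalf())        # numeric value, about 397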
Once the software decides what part of the input is a noun, and what is a verb, so to speak, it proceeds to perform the actions requested.
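To make "nouns and verbs" concrete, here is a minimal sketch of the standard technique, a recursive-descent parser, in Python. It handles +, -, *, / and ^ with parentheses; it is only an illustration of how grammar rules become code, not what Google or WolframAlpha actually run, and all the names are invented for the example:

import re

# Grammar, from lowest to highest precedence:
#   expr  -> term  (("+" | "-") term)*
#   term  -> power (("*" | "/") power)*
#   power -> atom  ("^" power)?          (right-associative)
#   atom  -> number | "(" expr ")"

def tokenize(text):
    return re.findall(r"\d+\.?\d*|[()+\-*/^]", text)

def parse_expr(tokens):
    value = parse_term(tokens)
    while tokens and tokens[0] in "+-":
        op = tokens.pop(0)
        rhs = parse_term(tokens)
        value = value + rhs if op == "+" else value - rhs
    return value

def parse_term(tokens):
    value = parse_power(tokens)
    while tokens and tokens[0] in "*/":
        op = tokens.pop(0)
        rhs = parse_power(tokens)
        value = value * rhs if op == "*" else value / rhs
    return value

def parse_power(tokens):
    base = parse_atom(tokens)
    if tokens and tokens[0] == "^":
        tokens.pop(0)
        return base ** parse_power(tokens)
    return base

def parse_atom(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        value = parse_expr(tokens)
        tokens.pop(0)            # consume the closing ")"
        return value
    return float(tok)

print(parse_expr(tokenize("5*5^2.71828")))   # about 397, i.e. 5*5^e

Because each grammar rule is just a function, extending the parser to new forms means adding rules rather than rewriting the program; that is how such code handles input of any (grammatical) shape.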
To understand more you might have to undertake studies of formal language, artificial intelligence, programming and areas I can't imagine.


Is C an imperative or declarative programming language?

It is quite confusing to understand the difference between imperative and declarative programming. Can anyone explain the difference between the two in real-world terms?
Kindly clarify whether C is an imperative or a declarative language.
C is an imperative programming language.
A one-line difference between the two: declarative programming is when you say what you want, and imperative programming is when you say how to get what you want. In declarative programming the focus is on what the computer should do rather than how it should do it (e.g. SQL), whereas in imperative programming the focus is on what steps the computer should take rather than on what the computer will do (e.g. C, C++, Java).
Imperative programming is a programming paradigm that describes computation in terms of statements that change a program state
Declarative programming is a programming paradigm, a style of building the structure and elements of computer programs, that expresses the logic of a computation without describing its control flow
Many imperative programming languages (such as Fortran, BASIC and C) are abstractions of assembly language.
The wiki says:
As an imperative language, C uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables may be assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by if(-else) conditional execution and by do-while, while, and for iterative execution (looping). The for statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. break and continue can be used to leave the innermost enclosing loop statement or skip to its reinitialization. There is also a non-structured goto statement which branches directly to the designated label within the function. switch selects a case to be executed based on the value of an integer expression.
Caveat
I am writing with a lot of generalities, so please bear with me.
In Theory
C is imperative, because code reads like a recipe for how to do something. However, if you use a lot of well-named functions and function pointers for polymorphism, it's possible to make C code look like a declarative language.
In imperative languages, you are focused on the algorithm/implementation. Engineering is inherently imperative, because you are focused on efficiency of a process: the cost of doing something in terms of time or money (or memory in CS) required.
In contrast, Mathematics is generally declarative (but writing a proof tends to be more imperative). In math, you care more about correctness and defining invariant relationships/operations, as opposed to how quickly you can get the answer.
Note that many functional languages tend to be declarative in nature (eg R, Lisp).
What does z = x + y mean? (Semantics)
In an imperative language, it means read from memory locations x and y, add those values together, and put the result into memory location z, and do it right now. If you assign a different value to x, you will have to use the z = x + y statement again to recalculate z.
In a declarative (lazy) language, it means z is a variable whose value is the sum of the values of two other variables x and y. The addition operation isn't executed until you try to read the value of z. What's the implication? If you read from z, the value will always be the sum of x and y at that moment in time; you do not need to reissue the statement. In pure declarative languages where there are no variables, a reissue can actually be caught as an error!!!
Keep this example in mind and you will see why mathematicians tend to prefer declarative languages. For example, I can define hypotenuse = sqrt( height^2 + length^2 ) and never worry about having to reissue that statement. The relationship is an invariant that will always hold, just like a mathematical truth always holds.
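Here is that example as a rough sketch in Python. Python itself is imperative, so the lazy variable is simulated with a function; the point is only to show the difference in how z is read:

x, y = 1, 2

# Imperative (eager): z is computed once, when the statement executes.
z = x + y
x = 10
print(z)            # still 3; z = x + y must be reissued to update it

# Declarative (lazy), simulated: z is a rule, evaluated on every read.
z_rule = lambda: x + y
print(z_rule())     # 12, always the sum of the current x and y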
In Real Life (and why should I care?)
Proponents of declarative languages claim: an efficient solution that is wrong (buggy) is useless. They want bug-free, state-less functions without side-effects that can be reused without modification.
Proponents of imperative languages claim: a correct solution that takes forever to run is also useless. They want control over the memory/speed tradeoff, and to be able to optimise based on physical and time constraints.
Of course, nothing is 100% imperative or declarative. Imperative code that is correct and well-written implies certain relationships. OTOH, declarative code, in sufficient depth and in conjunction with the language specifications, describes those relationships well enough for the compiler/interpreter to turn your code into a series of CPU instructions.
Because we are dealing with computers, a declarative compiler/interpreter must be smart enough to make time vs memory tradeoffs, whereas in an imperative language, it is up to the programmer to make those decisions more explicitly.
So a declarative language requires that the programmer focus on defining relationships between variables and other invariants. It is up to the compiler/interpreter to turn those relationships into a series of instructions/operations for the CPU. Most declarative compilers/interpreters are smart enough to handle most real-world cases, but may have trouble with edge cases. Unfortunately, in those situations you will have to coax the compiler/interpreter.
Which one is better?
Proponents of declarative languages claim that such languages allow programmers to focus on the domain and to write code that reads more easily for non-programmers. It is easier to write correct code, claim the advocates. However, the trade-off is that coaxing the compiler/interpreter to make the correct memory-vs-speed tradeoff can require some intricate knowledge of the language. You will understand this problem if you use a declarative language like R or SQL or LISP. It is certainly possible to define a new declarative language which has nothing to do with computers (but doing so may make things harder for the writer of the interpreter/compiler). Many mathematicians and pure CS researchers like declarative languages.
Imperative languages tend to give you finer-grained control over the machine. There is no question that you are programming a computer. The trap is that we can end up prematurely focusing on unnecessary speed optimisations that hurt code maintenance and readability. In the early days of computing, when speed and memory were severely limited, you needed imperative languages to get useful work done, optimised correctly for your situation. Engineers and tinkerers tend to gravitate towards imperative languages.
C is an imperative language.
An imperative language specifies how to do what you want. A declarative language specifies what you want, but not how to do it; the language works out how to do it. Prolog is an example of a declarative language.
I would like to comment that some aspects of the C language would be, in the absence of explicit rules, declarative...
int i = 4;
int j = 5;
float f = i/j;
would seem to mean that you intend f to be 0.8 (and in a declarative language it most likely would be)... but since there are well-defined rules, int/int evaluates to an int using integer division (which in C truncates toward zero), so f ends up as 0.0.
It is this explicitly defined behavior that makes C imperative.
There is also a secret under-layer of C where optimizations can be made, as long as they are guaranteed not to change the observable behavior of the program. This gives the compiler some declarative behavior: the "declaration" is the behavior of the input C program, and the end result can be anything that matches that program in functionality.
§5.1.2.3 part 10:
Alternatively, an implementation might perform various optimizations within each translation unit, such that the actual semantics would agree with the abstract semantics only when making function calls across translation unit boundaries. In such an implementation, at the time of each function entry and function return where the calling function and the called function are in different translation units, the values of all externally linked objects and of all objects accessible via pointers therein would agree with the abstract semantics. Furthermore, at the time of each such function entry the values of the parameters of the called function and of all objects accessible via pointers therein would agree with the abstract semantics. In this type of implementation, objects referred to by interrupt service routines activated by the signal function would require explicit specification of volatile storage, as well as other implementation-defined restrictions.
and a concrete example from the next part:
EXAMPLE 2 In executing the fragment
char c1, c2; /* ... */
c1 = c1 + c2;
the "integer promotions" require that the abstract machine promote the value of each variable to int size and then add the two ints and truncate the sum. Provided the addition of two chars can be done without overflow, or with overflow wrapping silently to produce the correct result, the actual execution need only produce the same result, possibly omitting the promotions.
-> Imperative programming: telling the "machine" how to do something, and as a result what you want to happen will happen.
-> Declarative programming: telling the "machine" what you would like to happen, and let the computer figure out how to do it.
So we can say C is an imperative language.

What to call a structured language that cannot loop, or a functional language that cannot return

I created a special-purpose "programming language" that deliberately (by design) cannot evaluate the same piece of code twice (ie. it cannot loop). It essentially is made to describe a flowchart-like process where each element in the flowchart is a conditional that performs a different test on the same set of data (without being able to modify it). Branches can split and merge, but never in a circular fashion, ie. the flowchart cannot loop back onto itself. When arriving at the end of a branch, the current state is returned and the program exits.
When written down, a typical program superficially resembles a program in a purely functional language, except that no form of recursion is allowed and functions can never return anything; the only way to exit a function is to call another function, or to invoke a general exit statement that returns the current state. A similar effect could also be achieved by taking a structured programming language and removing all loop statements, or by taking an "unstructured" programming language and forbidding any goto or jmp statement that goes backwards in the code.
Now my question is: is there a concise and accurate way to describe such a language? I don't have any formal CS background and it is difficult for me to understand articles about automata theory and formal language theory, so I'm a bit at a loss. I know my language is not Turing complete, and through great pain, I managed to assure myself that my language probably can be classified as a "regular language" (ie. a language that can be evaluated by a read-only Turing machine), but is there a more specific term?
Bonus points if the term is intuitively understandable to an audience that is well-versed in general programming concepts but doesn't have a formal CS background. Also bonus points if there is a specific kind of machine or automaton that evaluates such a language. Oh yeah, keep in mind that we're not evaluating a stream of data - every element has (read-only) access to the full set of input data. :)
I believe that your language is sufficiently powerful to encode precisely the star-free languages. This is a subset of the regular languages in which no expression contains a Kleene star. In other words, it's the class built from the empty string, the null set, and individual characters, closed under concatenation and disjunction. This is equivalent to the set of languages accepted by DFAs that don't have any directed cycles in them.
I can attempt a proof of this here given your description of your language, though I'm not sure it will work precisely correctly because I don't have full access to your language. The assumptions I'm making are as follows:
No functions ever return. Once a function is called, it will never return control flow to the caller.
All calls are resolved statically (that is, you can look at the source code and construct a graph of each function and the set of functions it calls). In other words, there aren't any function pointers.
The call graph is acyclic; for any functions A and B, then exactly one of the following holds: A transitively calls B, B transitively calls A, or neither A nor B transitively call one another.
More generally, the control flow graph is acyclic. Once an expression evaluates, it never evaluates again. This allows us to generalize the above so that instead of thinking of functions calling other functions, we can think of the program as a series of statements that all call one another as a DAG.
Your input is a string where each letter is scanned once and only once, and in the order in which it's given (which seems reasonable given the fact that you're trying to model flowcharts).
Given these assumptions, here's a proof that your programs accept a language iff that language is star-free.
To prove that for every star-free language there's a program in your language that accepts it, begin by constructing the minimum-state DFA for that language. Star-free languages are loop-free and scan the input exactly once, and so it should be easy to build a program in your language from the DFA. In particular, given a state s with a set of transitions to other states based on the next symbol of input, you can write a function that looks at the next character of input and then calls the function encoding the state being transitioned to. Since the DFA has no directed cycles, the function calls have no directed cycles, and so each statement will be executed exactly once. We now have that (∀ star-free languages R, ∃ a program in your language that accepts R).
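As an illustration of this construction, here is a hypothetical sketch in Python (the asker's language isn't shown, so the style is improvised): each DFA state becomes a function that inspects one input symbol and hands off to the next state's function, never looping back. This program accepts exactly the star-free (here, finite) language {"ab"}:

def accept(rest):
    return len(rest) == 0      # accept only if all input was consumed

def reject(rest):
    return False

def state_b(rest):             # expects "b", then end of input
    if rest and rest[0] == "b":
        return accept(rest[1:])
    return reject(rest)

def state_a(rest):             # start state: expects "a"
    if rest and rest[0] == "a":
        return state_b(rest[1:])
    return reject(rest)

print(state_a("ab"))   # True
print(state_a("ba"))   # False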
To prove the reverse direction of the implication, we essentially reverse this construction and create an ε-NFA with no cycles that corresponds to your program. Doing a subset construction on this NFA to reduce it to a DFA will not introduce any cycles, and so you'll have a star-free language. The construction is as follows: for each statement si in your program, create a state qi with a transition to each of the states corresponding to the other statements in your program that are one hop away from that statement. The transitions to those states will be labeled with the symbols of input consumed in making each of the decisions, or ε if the transition occurs without consuming any input. This shows that (∀ programs P in your language, ∃ a star-free language R containing just the strings accepted by P).
Taken together, this shows that your programs have identically the power of the star-free languages.
Of course, the assumptions I made on what your programs can do might be too limited. You might have random-access to the input sequence, which I think can be handled with a modification of the above construction. If you can potentially have cycles in execution, then this whole construction breaks. But, even if I'm wrong, I still had a lot of fun thinking about this, and thank you for an enjoyable evening. :-)
Hope this helps!
I know this question is somewhat old, but for posterity, the phrase you are looking for is "decision tree". See http://en.wikipedia.org/wiki/Decision_tree_model for details. I believe this captures exactly what you have done and has a pretty descriptive name to boot!

What is the use of finite automata? [closed]

What is the use of finite automata, and of all the concepts that we study in the theory of computation? I've never seen them used anywhere yet.
They are the theoretical underpinnings of concepts widely used in computer science and programming, and understanding them helps you better understand how to use them (and what their limits are). The three basic ones you should encounter are, in increasing order of power:
Finite automata, which are equivalent to regular expressions. Regular expressions are widely used in programming for matching strings and extracting text. They are a simple method of describing a set of valid strings using basic characters, grouping, and repetition. They can do a lot, but they can't match balanced sets of parentheses (see the short example after this list).
Push-down automata, equivalent to context-free grammars. Text/input parsers and compilers use these when regular expressions aren't powerful enough (and one of the things you learn in studying finite automata is what regular expressions can't do, which is crucial to knowing when to write a regular expression and when to use something more complicated). Context-free grammars can describe "languages" (sets of valid strings) where the validity at a certain point in parsing the string does not depend on what else has been seen.
Turing machines, equivalent to general computation (anything you can do with a computer). Some of the things you learn when you cover these enable you to understand the limits of computing itself. A good theory course will teach you about the Halting Problem, which enables you to identify problems for which it is impossible to write a program. Once you've identified such a problem, then you know to stop trying (or refine it to something that is possible).
Understanding the theory and limitations of these various computing mechanisms enables you to better understand problems and programs and to think more deeply about programming.
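As a quick illustration of the first two levels (a Python sketch; the pattern and the parenthesis checker are invented for the example):

import re

# Finite-automaton power: a regex happily extracts all runs of digits.
print(re.findall(r"\d+", "order 66, batch 12"))   # ['66', '12']

# But no regex can verify arbitrarily nested balanced parentheses; that
# needs more power, e.g. a counter playing the role of a pushdown stack:
def balanced(text):
    depth = 0
    for ch in text:
        depth += {"(": 1, ")": -1}.get(ch, 0)
        if depth < 0:
            return False      # a ")" appeared before its "("
    return depth == 0

print(balanced("(a(b)c)"))   # True
print(balanced("(()"))       # False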
There was a request-for-work published about a year ago on one of the freelance coding exchange sites asking, essentially, for a program which solved the Halting Problem. Several people responded with offers, saying they "understood the requirements" and could "start immediately". It was impossible to write a program which met the requirements. Understanding computing theory enables you to not be that bidder who demonstrates, in public, that he really doesn't understand computing (and doesn't bother to thoroughly investigate a problem before declaring understanding and making an offer).
Finite automata are very useful for communication protocols and for matching strings against regular expressions.
Automatons are used in hardware and software applications. Please read the implementation section here http://en.wikipedia.org/wiki/Finite-state_machine#Implementation
There is also a notion of Automata-based programming. Please check this http://en.wikipedia.org/wiki/Automata-based_programming
Every GUI and every workflow can be treated as a finite automaton. Think of each page as a state and transitions occurring due to certain events. Perhaps you can't proceed to a certain page or to the next stage of the workflow until a series of conditions are met.
Finite automata are, for example, used to parse formal languages. This means that finite automata are very useful in the construction of compilers and interpreters.
Historically, the finite state machine showed that a lot of problems can be solved by a very simple automaton.
Try taking a compilers course. You will very likely make a compiler or interpreter using a finite state automaton to implement a recursive descent parser.
For example, to manage the states of objects with a defined life cycle, such as orders in a book shop.
An order can have the following states:
- ordered
- paid
- shipping
- done
and the finite automaton knows how one state can change into another.
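A minimal sketch of that order life cycle as a transition table in Python (the event names are invented for the example):

transitions = {
    "ordered":  {"pay": "paid"},
    "paid":     {"ship": "shipping"},
    "shipping": {"deliver": "done"},
}

def step(state, event):
    legal = transitions.get(state, {})
    if event not in legal:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
    return legal[event]

state = "ordered"
for event in ["pay", "ship", "deliver"]:
    state = step(state, event)
print(state)   # done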
A finite automaton is a type of state machine (SM). In general, SMs are used for parsing formal languages.
You can use many kinds of entities as the symbols of a formal language, not only characters.
A regular language is one type of formal language.
There is some theory showing which type of SM is best suited to parsing a regular language:
http://en.wikipedia.org/wiki/Regular_language

Can a language be Turing-complete without any support for arrays?

If a language has control structures and variables, but no support for arrays, lists, memory access and allocation, etc, can it be Turing-complete?
Maybe if there was no limit to the amount of variables you can create, you can simulate arrays by creating variables like array_1, array_2, ... array_6000 and manually loop through them, and somehow create complex data structures and recursion?
Edit: Even if you cannot access variables by name manipulation (array_10+i is not allowed)?
Certainly. Have a look at Lambda Calculus, which is one of the most minimal Turing Complete languages I've ever seen. Basically, all you have are lambdas (function literals); no assignment, no declaration, no data structures. It's all very very slimmed-down.
You can, however, simulate a linear data structure like a List by chaining functions together. It gets pretty verbose, but it's certainly possible and it's much nicer than having a large series of sequentially named variables.
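As a sketch of that idea in Python: a list built from closures alone, in the spirit of the Church encodings used in the lambda calculus (no arrays, no mutation of data structures; the helper names mimic Lisp):

def cons(head, tail):
    return lambda want_head: head if want_head else tail

def car(cell):
    return cell(True)     # first element

def cdr(cell):
    return cell(False)    # rest of the list

nums = cons(1, cons(2, cons(3, None)))
print(car(nums))          # 1
print(car(cdr(nums)))     # 2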
Generally speaking, whether or not a language is Turing complete has nothing to do with whether it has arrays. Functional languages in the tradition of SML and Haskell can do without arrays as a primitive notion, just like Lambda Calculus, and these are actually useful languages! Saying a language is "Turing complete" is merely another way of saying that there is no Turing-computable function which cannot be expressed in said language. This is a surprisingly loose qualification, allowing many languages which would be completely impractical (like Lambda Calculus).
There are plenty of Turing-complete languages that don't even have the notion of a "variable"! Memory access and allocations are implementation details, so they're completely irrelevant. You have to realize that Turing machines and Turing completeness are very theoretical concepts, useful for proving things, but completely divorced from the reality of actual hardware.
Paul Graham has written a long, but very, very interesting essay on the history of computer languages where he describes the two very different main traditions of computer languages:
Lisp, Scheme, etc. - derived from theoretical considerations, very simple, yet conceptually powerful languages, but for the longest time impractical because of their complete disregard for what's easy and efficient to implement
Assembler, FORTRAN, C and pretty much all "mainstream" languages - derived more or less directly from what the hardware could do, easy to implement, efficient, but for the longest time inferior to the (older!) Lisp family in terms of expressiveness.
It sounds like you know only the second tradition, but Turing completeness is a concept that originates from the same principles as the first tradition and makes little sense if you don't know those principles.

Halting in non-Turing-complete languages

The halting problem cannot be solved for Turing-complete languages, and it can be solved trivially for some non-TC languages like regexes, where evaluation always halts.
I was wondering if there are any languages that have both the ability to halt and not halt, but admit an algorithm that can determine whether a given program halts.
The halting problem does not act on languages. Rather, it acts on machines (i.e., programs): it asks whether a given program halts on a given input. Perhaps you meant to ask whether it can be solved for other models of computation (like regular expressions, which you mention, but also like push-down automata).
Halting can, in general, be detected in models with finite resources (like regular expressions or, equivalently, finite automata, which have a fixed number of states and no external storage). This is easily accomplished by enumerating all possible configurations and checking whether the machine enters the same configuration twice (indicating an infinite loop); with finite resources, we can put an upper bound on the amount of time before we must see a repeated configuration if the machine does not halt.
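A sketch of that argument as code (the machine model is abstracted into a hypothetical step function; any system with finitely many configurations fits):

def halts(step, start):
    # step(config) returns the next configuration, or None on halting.
    seen = set()
    config = start
    while config is not None:
        if config in seen:
            return False       # repeated configuration: it loops forever
        seen.add(config)
        config = step(config)
    return True

# A counter that halts at 4, and one that cycles 0, 1, 2 forever:
print(halts(lambda c: None if c == 4 else c + 1, 0))   # True
print(halts(lambda c: (c + 1) % 3, 0))                 # False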
Usually, models with infinite resources (unbounded TMs and PDAs, for instance) cannot be halt-checked, but it would be best to investigate the models and their open problems individually.
(Sorry for all the Wikipedia links, but it actually is a very good resource for this kind of question.)
Yes. One important class of this kind are the primitive recursive functions. This class includes all of the basic things you expect to be able to do with numbers (addition, multiplication, etc.), as well as some complex classes like the ones @adrian has mentioned (regular expressions/finite automata, context-free grammars/pushdown automata). There do, however, exist computable functions that are not primitive recursive, such as the Ackermann function.
It's actually pretty easy to understand primitive recursive functions. They're the functions that you could get in a programming language that had no true recursion (so a function f cannot call itself, whether directly or by calling another function g that then calls f, etc.) and has no while-loops, instead having bounded for-loops. A bounded for-loop is one like "for i from 1 to r" where r is a variable that has already been computed earlier in the program; also, i cannot be modified within the for-loop. The point of such a programming language is that every program halts.
Most programs we write are actually primitive recursive (I mean, they can be translated into such a language).
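A sketch of that bounded-loop discipline, written as a Python subset (no while, no recursion, and every loop bound is fixed before the loop starts, so every program in this style halts):

def add(a, b):
    result = a
    for _ in range(b):        # bound b is computed before the loop runs
        result = result + 1
    return result

def mul(a, b):
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

print(mul(6, 7))   # 42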
The short answer is yes, and such languages can even be extremely useful.
There was a discussion about it a few months ago on LtU:
http://lambda-the-ultimate.org/node/2846
