Halting in non-Turing-complete languages - theory

The halting problem cannot be solved for Turing-complete languages, and it can be solved trivially for some non-Turing-complete languages, such as regular expressions, where every program halts.
I was wondering: are there any languages that have both programs that halt and programs that do not halt, but that still admit an algorithm that can determine whether a given program halts?

The halting problem does not act on languages. Rather, it acts on machines
(i.e., programs): it asks whether a given program halts on a given input.
Perhaps you meant to ask whether it can be solved for other models of
computation (like regular expressions, which you mention, but also like
push-down automata).
Halting can, in general, be detected in models with finite resources (like
regular expressions or, equivalently, finite automata, which have a fixed
number of states and no external storage). This is easily accomplished by
enumerating all possible configurations and checking whether the machine enters
the same configuration twice (indicating an infinite loop); with finite
resources, we can put an upper bound on the amount of time before we must see
a repeated configuration if the machine does not halt.
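To make the pigeonhole argument concrete, here is a sketch in C (the machine interface is hypothetical; the real details depend on the model). A deterministic machine with finitely many configurations either halts within that many steps or repeats a configuration and loops forever:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical interface: configurations are encoded as integers
   0 .. num_configs-1, step() is the deterministic transition, and
   is_halted() tests for a halting configuration. */
typedef struct {
    size_t num_configs;
    size_t (*step)(size_t config);
    bool (*is_halted)(size_t config);
} FiniteMachine;

/* Decide halting by pigeonhole: after num_configs + 1 visited
   configurations without halting, some configuration must have
   repeated, and a deterministic machine that repeats a
   configuration loops forever. */
bool halts(const FiniteMachine *m, size_t start) {
    size_t config = start;
    for (size_t steps = 0; steps <= m->num_configs; steps++) {
        if (m->is_halted(config))
            return true;
        config = m->step(config);
    }
    return false;
}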
Usually, models with infinite resources (unbounded TMs and PDAs, for instance)
cannot be halt-checked, but it is best to investigate each model and
its open problems individually.

Yes. One important class of this kind is the primitive recursive functions. This class includes all of the basic things you expect to be able to do with numbers (addition, multiplication, etc.), as well as some of the more complex classes @adrian has mentioned (regular expressions/finite automata, context-free grammars/pushdown automata). There do, however, exist computable functions that are not primitive recursive, such as the Ackermann function.
It's actually pretty easy to understand primitive recursive functions. They're the functions that you could get in a programming language that had no true recursion (so a function f cannot call itself, whether directly or by calling another function g that then calls f, etc.) and has no while-loops, instead having bounded for-loops. A bounded for-loop is one like "for i from 1 to r" where r is a variable that has already been computed earlier in the program; also, i cannot be modified within the for-loop. The point of such a programming language is that every program halts.
Most programs we write are actually primitive recursive (I mean, can be translated into such a language).
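As a rough sketch of such a bounded-loop language, here is ordinary C deliberately restricted in the way described: no recursion, and every loop bound is fixed before the loop starts, so every run terminates (the functions themselves are just illustrative):

/* Addition and multiplication in "bounded for-loop" style: the
   bound b is computed before the loop starts and i is never
   modified inside, so both functions always halt. */
unsigned add(unsigned a, unsigned b) {
    unsigned result = a;
    for (unsigned i = 0; i < b; i++)
        result = result + 1;
    return result;
}

unsigned multiply(unsigned a, unsigned b) {
    unsigned result = 0;
    for (unsigned i = 0; i < b; i++)
        result = add(result, a);   /* calls forward, never recursively */
    return result;
}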

The short answer is yes, and such languages can even be extremely useful.
There was a discussion about it a few months ago on LtU:
http://lambda-the-ultimate.org/node/2846

Related

What is the meaning of 'construct' in programming languages

I see the term 'construct' come up very often in programming readings. The current book I am reading, "Programming in C" by Stephen Kochan, has used it a few times throughout the book. One example is in the chapter on looping, which says:
"When developing programs, it sometimes becomes desirable to have the
test made at the end of the loop rather than at the beginning.
Naturally, the C language provides a special language construct to
handle such a situation. This looping statement is known as the do
statement."
In this case what does the term 'construct' mean, and does the word 'construct' have any relation to an object 'constructor' in other languages?
In this case you can replace the word construct with syntax.
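For example, the do statement the book describes is one such construct: a loop whose test is made at the end, so the body always runs at least once:

#include <stdio.h>

int main(void) {
    int n = 0;
    /* The test (n < 3) is evaluated at the end of each pass,
       after the body has already executed. */
    do {
        printf("n = %d\n", n);
        n++;
    } while (n < 3);
    return 0;
}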
does the word 'construct' have any relation to an object 'constructor' in other languages?
No. The two terms are different; there is nothing like a constructor in C.
It's a generic term that normally refers to some particular syntax included in the language to perform some task (like a loop with the condition at the end). It has no relation at all with constructors.
Well, besides the fact that constructors are a particular language construct in many OO languages.
does the word 'construct' have any relation to an object 'constructor' in other languages?
The sentence uses the noun meaning of the word "construct", not the verb meaning:
construct (n) - something (such as an idea or a theory) that is formed in people's minds.
In this case, "construct" refers to an abstract way of describing something (namely, a loop) in terms of the syntax of that particular language. "Language construct" means "a way to do [something] with that language".
A construct is simply a concept implementation mechanism used by a given programming language - the language's syntax.
In your case, the concept here is a loop and its construct is the manner in which it is implemented by the C programming language.
Programming languages provide constructs for various programming concepts that define how these programming concepts are implemented in that language.
Does the word 'construct' have any relation to an object 'constructor' in other languages?
The two terms are different: a constructor is used in object-oriented languages such as Java; it is not available in the C programming language.
First of all, remember that C is not an object-oriented programming language; 'constructor' is OOP terminology. Here, 'construct' refers to syntax and predefined keywords, like do and while in this case.
The word “construct” as used in computer programming has a very broad, general meaning. I hope these thoughts will help to explain what is meant by the word “construct” in programming.
A computer program is a list of instructions that the computer is able to (a) understand and (b) execute (perform or carry out). The simplest program would be a list of - let’s call them statements - that the computer would execute in sequence, one after the other, from the first to the last, and then end. But that would be incredibly limiting - so limiting in fact that I don’t think computers would ever have become much more than simple calculators. One of the fundamental differences between a simple calculator and a computer is that the statements do not have to be executed in sequence. The sequence can be interrupted by “special” instructions (statements) which can divert the flow of execution from one stream to a totally different stream which has a completely different agenda.
The first obvious way this is done is with methods (functions or procedures). When a method is called, the flow of execution is diverted from one stream of statements to a totally different stream of statements, often unrelated to the stream from which it came. If that concept is accepted, then I think that an instruction that calls a method could also be regarded as a “construct”.
Let’s divert this discussion for a moment to talk about “blocks” of code.
Programmers who work in languages like C, C++ or Java know that pairs of opening and closing braces (curly brackets) are used to identify blocks of code. And it’s blocks of code that divide a program up into different processes or procedures. A block of code that is headed by, say, a while() loop is just as valid as a method, in that it interrupts the otherwise unimpeded flow of execution through a program. The same applies to the many categories of operators. “new” will divert the flow of statement execution to a constructor method. So we have all these various syntactical expressions that have one thing in common - they divert the flow of execution which, left to its own devices, would happily proceed executing statements of code in sequence one after the other, without any interruption.
So I am suggesting that the word “construct” is a useful collective noun that embraces all of these different and diverse syntactical expressions e.g. if() for() switch() etc. that use the different categories of operators to perform functions that are defined in their respective blocks of code. Would love to hear other opinions.
In short, all built-in features of a programming language (like arrays, loops, and if-else statements) are language constructs.

Is C an imperative or declarative programming language

It is quite confusing to know the difference between imperative and declarative programming. Can anyone explain the difference between the two in real-world terms?
Kindly clarify whether C is an imperative or a declarative language.
C is an imperative programming language.
A one-line difference between the two would be: declarative programming is when you say what you want, and imperative programming is when you say how to get what you want. In declarative programming the focus is on what the computer should do rather than how it should do it (e.g. SQL), whereas in imperative programming the focus is on what steps the computer should take rather than on what the computer will do (e.g. C, C++, Java).
Imperative programming is a programming paradigm that describes computation in terms of statements that change a program state
Declarative programming is a programming paradigm, a style of building the structure and elements of computer programs, that expresses the logic of a computation without describing its control flow
Many imperative programming languages (such as Fortran, BASIC and C) are abstractions of assembly language.
Wikipedia says:
As an imperative language, C uses statements to specify actions. The
most common statement is an expression statement, consisting of an
expression to be evaluated, followed by a semicolon; as a side effect
of the evaluation, functions may be called and variables may be
assigned new values. To modify the normal sequential execution of
statements, C provides several control-flow statements identified by
reserved keywords. Structured programming is supported by if(-else)
conditional execution and by do-while, while, and for iterative
execution (looping). The for statement has separate initialization,
testing, and reinitialization expressions, any or all of which can be
omitted. break and continue can be used to leave the innermost
enclosing loop statement or skip to its reinitialization. There is
also a non-structured goto statement which branches directly to the
designated label within the function. switch selects a case to be
executed based on the value of an integer expression.
Caveat
I am writing with a lot of generalities, so please bear with me.
In Theory
C is imperative, because code reads like a recipe for how to do something. However, if you use a lot of well-named functions and function pointers for polymorphism, it's possible to make C code look like a declarative language.
In imperative languages, you are focused on the algorithm/implementation. Engineering is inherently imperative, because you are focused on efficiency of a process: the cost of doing something in terms of time or money (or memory in CS) required.
In contrast, Mathematics is generally declarative (but writing a proof tends to be more imperative). In math, you care more about correctness and defining invariant relationships/operations, as opposed to how quickly you can get the answer.
Note that many functional languages tend to be declarative in nature (e.g. R, Lisp).
What does z = x + y mean? (Semantics)
In an imperative language, it means read from memory locations x and y, add those values together, and put the result into memory location z, and do it right now. If you assign a different value to x, you will have to use the z = x + y statement again to recalculate z.
In a declarative (lazy) language, it means z is a variable whose value is the sum of the values of two other variables x and y. The addition operation isn't executed until you try to read the value of z. What's the implication? If you read from z, the value will always be the sum of x and y at that moment in time; you do not need to reissue the statement. In pure declarative languages where there are no variables, a reissue can actually be caught as an error!!!
Keep this example in mind and you will see why mathematicians tend to prefer declarative languages. For example, I can define hypotenuse = sqrt( height^2 + length^2 ) and never worry about having to reissue that statement. The relationship is an invariant that will always hold, just like a mathematical truth always holds.
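A minimal C illustration of the imperative reading: the addition is performed at the moment the statement executes, and z does not track later changes to x:

#include <stdio.h>

int main(void) {
    int x = 1, y = 2;
    int z = x + y;            /* executed right now: z becomes 3 */
    x = 10;                   /* z is still 3 */
    printf("z = %d\n", z);
    z = x + y;                /* must reissue the statement to recompute */
    printf("z = %d\n", z);    /* now 12 */
    return 0;
}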
In Real Life (and why should I care?)
Proponents of declarative languages claim: an efficient solution that is wrong (buggy) is useless. They want bug-free, stateless functions without side effects that can be reused without modification.
Proponents of imperative languages claim: a correct solution that takes forever to run is also useless. They want control over the memory/speed tradeoff. They want to be able to optimise based on physical and time constraints.
Of course, nothing is 100% imperative or declarative. Imperative code that is correct and well-written implies certain relationships. OTOH, declarative code, in sufficient depth and in conjunction with the language specifications, describes those relationships well enough for the compiler/interpreter to turn your code into a series of CPU instructions.
Because we are dealing with computers, a declarative compiler/interpreter must be smart enough to make time vs memory tradeoffs, whereas in an imperative language, it is up to the programmer to make those decisions more explicitly.
So a declarative language requires that the programmer focus on defining relationships between variables and other invariants. It is up to the compiler/interpreter to turn those relationships into a series of instructions/operations for the CPU. Most declarative compilers/interpreters are smart enough to handle most real-world cases, but may have trouble with edge cases. Unfortunately, in those situations you will have to coax the compiler/interpreter.
Which one is better?
Proponents of declarative languages claim that such languages allow the programmers to focus on the domain and to write code that reads easier for non-programmers. It is easier to write correct code, claim the advocates. However, the trade-off is, coaxing the compiler/interpreter to make the correct memory vs speed tradeoff can require some intricate knowledge of the language. You will understand this problem if you use a declarative language like R or SQL or LISP. It is certainly possible to define a new declarative language which has nothing to do with computers (but doing so may make it harder for the writer of the interpreter/compiler). Many mathematicians and pure CS researchers like declarative languages.
Imperative languages tend to give you finer-grained control over the machine. There is no question that you are programming a computer. The trap is, we can end up prematurely focusing on unnecessary speed optimisations that hurt code maintenance and readability. In the early days of computing, when speed and memory were severely limited, you needed imperative languages to get useful work done, optimised correctly for your situation. Engineers and tinkerers tend to gravitate towards imperative languages.
C is an imperative language.
An imperative language specifies how to do what you want. A declarative language specifies what you want, but not how to do it; the language works out how to do it. Prolog is an example of a declarative language.
I would like to comment that some aspects of the C language would be, in the absence of explicit rules, declarative...
int i = 4;
int j = 5;
float f = i/j;   /* integer division: 4/5 evaluates to 0, so f is 0.0, not 0.8 */
would seem to suggest that you intend f to be 0.80 (and in a declarative language it most likely would be)... but since there are well-defined rules, int/int evaluates to an int using integer division (which in C truncates toward zero), so f ends up as 0.0.
It is this aspect of explicitly defined behavior that makes C imperative.
There is also a hidden lower layer of C where optimizations can be made, as long as they are guaranteed not to change the output of the program. This gives the compiler some declarative behavior: the "declaration" is the observable behavior of the input C program, but the end result can be anything that matches that C program in functionality.
§5.1.2.3 part 10:
Alternatively, an implementation might perform various optimizations
within each translation unit, such that the actual semantics would
agree with the abstract semantics only when making function calls
across translation unit boundaries. In such an implementation, at the
time of each function entry and function return where the calling
function and the called function are in different translation units,
the values of all externally linked objects and of all objects
accessible via pointers therein would agree with the abstract
semantics. Furthermore, at the time of each such function entry the
values of the parameters of the called function and of all objects
accessible via pointers therein would agree with the abstract
semantics. In this type of implementation, objects referred to by
interrupt service routines activated by the signal function would
require explicit specification of volatile storage, as well as other
implementation-defined restrictions.
and a concrete example from the next part:
EXAMPLE 2 In executing the fragment
char c1, c2; /* ... */
c1 = c1 + c2;
the "integer promotions" require that the abstract machine promote
the value of each variable to int size and then add
the two ints and truncate the sum. Provided the addition of two chars
can be done without overflow, or with overflow wrapping silently to
produce the correct result, the actual execution need only produce the
same result, possibly omitting the promotions.
-> Imperative programming: telling the "machine" how to do something, and as a result what you want to happen will happen.
-> Declarative programming: telling the "machine" what you would like to happen, and let the computer figure out how to do it.
So we can say C is an imperative language.

Is it okay to use functions to stay organized in C?

I'm a relatively new C programmer, and I've noticed that many conventions from other higher-level OOP languages don't exactly hold true on C.
Is it okay to use short functions to have your coding stay organized (even though it will likely be called only once)? An example of this would be 10-15 lines in something like void init_file(void), then calling it first in main().
I would have to say, not only is it OK, but it's generally encouraged. Just don't overly fragment the train of thought by creating myriads of tiny functions. Try to ensure that each function performs a single cohesive, well... function, with a clean interface (too many parameters can be a hint that the function is performing work which is not sufficiently separate from its caller).
Furthermore, well-named functions can serve to replace comments that would otherwise be needed. As well as providing re-use, functions can also (or instead) provide a means to organize the code and break it down into smaller units which can be more readily understood. Using functions in this way is very much like creating packages and classes/modules, though at a more fine-grained level.
Yes. Please. Don't write long functions. Write short ones that do one thing and do it well. The fact that they may only be called once is fine. One benefit is that if you name your function well, you can avoid writing comments that will get out of sync with the code over time.
If I can take the liberty to do some quoting from Code Complete:
(These reason details have been abbreviated and in spots paraphrased, for the full explanation see the complete text.)
Valid Reasons to Create a Routine
Note the reasons overlap and are not intended to be independent of each other.
Reduce complexity - The single most important reason to create a routine is to reduce a program's complexity (hide away details so you don't need to think about them).
Introduce an intermediate, understandable abstraction - Putting a section of code into a well-named routine is one of the best ways to document its purpose.
Avoid duplicate code - The most popular reason for creating a routine. Saves space and is easier to maintain (only have to check and/or modify one place).
Hide sequences - It's a good idea to hide the order in which events happen to be processed.
Hide pointer operations - Pointer operations tend to be hard to read and error prone. Isolating them into routines shifts focus to the intent of the operation instead of the mechanics of pointer manipulation.
Improve portability - Use routines to isolate nonportable capabilities.
Simplify complicated boolean tests - Putting complicated boolean tests into a function makes the code more readable because the details of the test are out of the way and a descriptive function name summarizes the purpose of the tests (see the sketch after this list).
Improve performance - You can optimize the code in one place instead of several.
To ensure all routines are small? - No. With so many good reasons for putting code into a routine, this one is unnecessary. (This is the one thrown into the list to make sure you are paying attention!)
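As a small illustration of the point about boolean tests (the struct fields and cutoffs here are made up for the example):

#include <stdbool.h>

struct session {
    int idle_seconds;
    bool authenticated;
    int failures;
};

/* The descriptive name summarizes the test; the details stay in
   one place. The call site reads: if (session_expired(&s)) ... */
static bool session_expired(const struct session *s) {
    return s->idle_seconds > 900       /* assumed 15-minute timeout */
        || !s->authenticated
        || s->failures >= 3;           /* assumed retry limit */
}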
And one final quote from the text (Chapter 7: High-Quality Routines)
One of the strongest mental blocks to
creating effective routines is a
reluctance to create a simple routine
for a simple purpose. Constructing a
whole routine to contain two or three
lines of code might seem like
overkill, but experience shows how
helpful a good small routine can be.
If a group of statements can be thought of as a thing - then make them a function
I think it is more than OK; I would recommend it! Short, easy-to-verify functions with well-thought-out names lead to code that is more self-documenting than long, complex functions.
Any compiler worth using will be able to inline these calls to generate efficient code if needed.
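A minimal sketch of the pattern from the question (the names init_file and app.log are hypothetical):

#include <stdio.h>
#include <stdlib.h>

static FILE *log_file;

/* 10-15 lines of setup, pulled out of main() purely for
   organization; it is called exactly once. */
static void init_file(void) {
    log_file = fopen("app.log", "w");
    if (log_file == NULL) {
        perror("app.log");
        exit(EXIT_FAILURE);
    }
    fprintf(log_file, "log started\n");
}

int main(void) {
    init_file();   /* main() stays a readable outline */
    /* ... rest of the program ... */
    fclose(log_file);
    return 0;
}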
Functions are absolutely necessary to stay organized. You first need to design the problem, and then split it into functions according to the different pieces of functionality you need. Any segment of code that is used multiple times probably needs to be written as a function.
I think you should first think about the problem you have at hand, break it down into components, and try writing a function for each component. When writing a function, see whether some code segment does the same thing elsewhere; if so, break it into a sub-function. If there is a sub-module, it is also a candidate for another function. But at some point this breaking up should stop, and that depends on you. Generally, do not make too many big functions, nor too many small ones.
When constructing a function, please consider the design so that it has high cohesion and low coupling.
EDIT:
You might also want to consider separate modules. For example, if you need to use a stack or a queue for some application, make it a separate module whose functions can be called from other functions. This way you can avoid re-coding commonly used modules by programming them as a group of functions stored separately.
Yes
I follow a few guidelines:
DRY (aka DIE)
Keep Cyclomatic Complexity low
Functions should fit in a Terminal window
Each one of these principles at some point will require that a function be broken up, although I suppose #2 could imply that two functions with straight-line code should be combined. It's somewhat more common to do what is called method extraction than actually splitting a function into a top and bottom half, because the usual reason is to extract common code to be called more than once.
#1 is quite useful as a decision aid. It's the same thing as saying, as I do, "never copy code".
#2 gives you a good reason to break up a function even if there is no repeated code. If the decision logic passes a certain complexity threshold, we break it up into more functions that make fewer decisions.
It is indeed a good practice to refactor code into functions, irrespective of the language being used. Even if your code is short, it will make it more readable.
If your function is quite short, you can consider inlining it.
IBM Publib article on inlining

How to call a structured language that cannot loop or a functional language that cannot return

I created a special-purpose "programming language" that deliberately (by design) cannot evaluate the same piece of code twice (ie. it cannot loop). It essentially is made to describe a flowchart-like process where each element in the flowchart is a conditional that performs a different test on the same set of data (without being able to modify it). Branches can split and merge, but never in a circular fashion, ie. the flowchart cannot loop back onto itself. When arriving at the end of a branch, the current state is returned and the program exits.
When written down, a typical program superficially resembles a program in a purely functional language, except that no form of recursion is allowed and functions can never return anything; the only way to exit a function is to call another function, or to invoke a general exit statement that returns the current state. A similar effect could also be achieved by taking a structured programming language and removing all loop statements, or by taking an "unstructured" programming language and forbidding any goto or jmp statement that goes backwards in the code.
Now my question is: is there a concise and accurate way to describe such a language? I don't have any formal CS background and it is difficult for me to understand articles about automata theory and formal language theory, so I'm a bit at a loss. I know my language is not Turing complete, and through great pain, I managed to assure myself that my language probably can be classified as a "regular language" (ie. a language that can be evaluated by a read-only Turing machine), but is there a more specific term?
Bonus points if the term is intuitively understandable to an audience that is well-versed in general programming concepts but doesn't have a formal CS background. Also bonus points if there is a specific kind of machine or automaton that evaluates such a language. Oh yeah, keep in mind that we're not evaluating a stream of data - every element has (read-only) access to the full set of input data. :)
I believe that your language is sufficiently powerful to encode precisely the star-free languages. This is a subset of the regular languages in which no expression contains a Kleene star. In other words, it's the class built from the language of the empty string, the empty set, and the individual characters, closed under concatenation and disjunction. This is equivalent to the set of languages accepted by DFAs that don't have any directed cycles in them.
I can attempt a proof of this here given your description of your language, though I'm not sure it will work precisely correctly because I don't have full access to your language. The assumptions I'm making are as follows:
No functions ever return. Once a function is called, it will never return control flow to the caller.
All calls are resolved statically (that is, you can look at the source code and construct a graph of each function and the set of functions it calls). In other words, there aren't any function pointers.
The call graph is acyclic; for any functions A and B, then exactly one of the following holds: A transitively calls B, B transitively calls A, or neither A nor B transitively call one another.
More generally, the control flow graph is acyclic. Once an expression evaluates, it never evaluates again. This allows us to generalize the above so that instead of thinking of functions calling other functions, we can think of the program as a series of statements that all call one another as a DAG.
Your input is a string where each letter is scanned once and only once, and in the order in which it's given (which seems reasonable given the fact that you're trying to model flowcharts).
Given these assumptions, here's a proof that your programs accept a language iff that language is star-free.
To prove that if there's a star-free language, there's a program in your language that accepts it, begin by constructing the minimum-state DFA for that language. Star-free languages are loop-free and scan the input exactly once, and so it should be easy to build a program in your language from the DFA. In particular, given a state s with a set of transitions to other states based on the next symbol of input, you can write a function that
looks at the next character of input and then calls the function encoding the state being transitioned to. Since the DFA has no directed cycles, the function calls have no directed cycles, and so each statement will be executed exactly once. We now have that (∀ star-free languages R, ∃ a program in your language that accepts R).
To prove the reverse direction of implication, we essentially reverse this construction and create an ε-NFA with no cycles that corresponds to your program. Doing a subset construction on this NFA to reduce it to a DFA will not introduce any cycles, and so you'll have a star-free language. The construction is as follows: for each statement s_i in your program, create a state q_i with a transition to each of the states corresponding to the other statements in your program that are one hop away from that statement. The transitions to those states will be labeled with the symbols of input consumed making each of the decisions, or ε if the transition occurs without consuming any input. This shows that (∀ programs P in your language, ∃ a star-free language R that contains just the strings accepted by your program).
Taken together, this shows that your programs have identically the power of the star-free languages.
Of course, the assumptions I made on what your programs can do might be too limited. You might have random-access to the input sequence, which I think can be handled with a modification of the above construction. If you can potentially have cycles in execution, then this whole construction breaks. But, even if I'm wrong, I still had a lot of fun thinking about this, and thank you for an enjoyable evening. :-)
Hope this helps!
I know this question is somewhat old, but for posterity, the phrase you are looking for is "decision tree". See http://en.wikipedia.org/wiki/Decision_tree_model for details. I believe this captures exactly what you have done and has a pretty descriptive name to boot!
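For a concrete picture, here is a tiny C sketch (the data and the tests are made up) of such a loop-free, flowchart-like process: each function performs one test on the read-only data and only calls forward, branches may merge, and nothing can ever run twice:

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int temperature;
    bool raining;
} Data;   /* the read-only input every test can see */

static const char *check_rain(const Data *d) {
    return d->raining ? "stay inside" : "go outside";
}

static const char *check_temperature(const Data *d) {
    if (d->temperature < 0)
        return "stay inside";      /* this branch merges with the one above */
    return check_rain(d);          /* forward call only; no cycles possible */
}

int main(void) {
    Data d = { 12, false };
    puts(check_temperature(&d));
    return 0;
}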

What is the use of finite automata? [closed]

What is the use of finite automata, and of all the concepts that we study in the theory of computation? I've never seen their uses yet.
They are the theoretical underpinnings of concepts widely used in computer science and programming, and understanding them helps you better understand how to use them (and what their limits are). The three basic ones you should encounter are, in increasing order of power:
Finite automata, which are equivalent to regular expressions. Regular expressions are widely used in programming for matching strings and extracting text. They are a simple method of describing a set of valid strings using basic characters, grouping, and repetition. They can do a lot, but they can't match balanced sets of parentheses.
Push-down automata, equivalent to context-free grammars. Text/input parsers and compilers use these when regular expressions aren't powerful enough (and one of the things you learn in studying finite automata is what regular expressions can't do, which is crucial to knowing when to write a regular expression and when to use something more complicated). Context-free grammars can describe "languages" (sets of valid strings) where the validity at a certain point in parsing the string does not depend on what else has been seen.
Turing machines, equivalent to general computation (anything you can do with a computer). Some of the things you learn when you cover these enable you to understand the limits of computing itself. A good theory course will teach you about the Halting Problem, which enables you to identify problems for which it is impossible to write a program. Once you've identified such a problem, then you know to stop trying (or refine it to something that is possible).
Understanding the theory and limitations of these various computing mechanisms enable you to better understand problems and programs and think more deeply about programming.
There was a request-for-work published about a year ago on one of the freelance coding exchange sites asking, essentially, for a program which solved the Halting Problem. Several people responded with offers, saying they "understood the requirements" and could "start immediately". It was impossible to write a program which met the requirements. Understanding computing theory enables you to not be that bidder who demonstrates, in public, that he really doesn't understand computing (and doesn't bother to thoroughly investigate a problem before declaring understanding and making an offer).
Finite automata are very useful for communication protocols and for matching strings against regular expressions.
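For instance, here is a minimal C sketch of a DFA that accepts strings over {a, b} ending in "ab", the kind of recognizer a regular expression such as (a|b)*ab compiles down to:

#include <stdbool.h>

bool ends_in_ab(const char *s) {
    int state = 0;   /* 0: start, 1: just saw 'a', 2: just saw "ab" */
    for (; *s != '\0'; s++) {
        if (*s == 'a')
            state = 1;                      /* 'a' always leads to state 1 */
        else if (*s == 'b')
            state = (state == 1) ? 2 : 0;   /* "ab" completed, or reset */
        else
            state = 0;                      /* other characters reset */
    }
    return state == 2;                      /* the only accepting state */
}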
Automata are used in hardware and software applications. Please read the implementation section here: http://en.wikipedia.org/wiki/Finite-state_machine#Implementation
There is also a notion of Automata-based programming. Please check this http://en.wikipedia.org/wiki/Automata-based_programming
cheers
Every GUI and every workflow can be treated as a finite automaton. Think of each page as a state and transitions occurring due to certain events. Perhaps you can't proceed to a certain page or the next stage of the workflow until a series of conditions are met.
Finite automata are used, for example, to parse formal languages. This means that finite automata are very useful in the creation of compiler and interpreter techniques.
Historically, the finite state machine showed that a lot of problems can be solved by a very simple automaton.
Try taking a compilers course. You will very likely make a compiler or interpreter using a finite state automaton to implement a recursive descent parser.
For example, to manage the states of objects with a defined life cycle.
An example of this: orders in a book shop.
An order can have the following states:
-ordered
-paid
-shipping
-done
and the program of the finite automaton knows how one state can be changed into another.
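In C, that kind of life cycle maps naturally onto an enum plus a transition function (a minimal sketch using the states above):

#include <stdbool.h>

typedef enum { ORDERED, PAID, SHIPPING, DONE } OrderState;

/* Encodes which state changes the automaton allows. */
bool can_transition(OrderState from, OrderState to) {
    switch (from) {
        case ORDERED:  return to == PAID;
        case PAID:     return to == SHIPPING;
        case SHIPPING: return to == DONE;
        case DONE:     return false;   /* final state */
    }
    return false;
}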
A finite automaton is a type of state machine (SM). In general, SMs are used for parsing formal languages.
You can use many kinds of entities as the symbols of a formal language, not only characters.
And regular language is a type of formal language.
There is some theory that shows what type of SM is better suited to parsing a regular language:
http://en.wikipedia.org/wiki/Regular_language
