Pre-requisites for learning lambda calculus - theory

Can anyone tell me what the prerequisites are (if any) for learning lambda calculus?

That really depends on what you want to do with the lambda calculus. If you want to learn it just to see how it works there really aren't any prerequisites; it's pretty self-contained. However, if you want to understand any of the proofs about it (Turing-completeness, Church numerals, normalization, etc.) you might need more math prereqs. In particular, I'd suggest a background in inductive proof techniques, especially structural induction. It also might be nice to know a little about either the halting problem or some sort of incompleteness theorem, since some of the fun results with lambda calculus involve non-computability.
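For a taste of what those proofs manipulate, here are the first few Church numerals and the successor function in standard lambda notation; checking that SUCC 1 reduces to 2 is a nice first exercise in beta reduction and needs nothing beyond the definitions:
0    = λf.λx. x
1    = λf.λx. f x
2    = λf.λx. f (f x)
SUCC = λn.λf.λx. f (n f x)
SUCC 1 = (λn.λf.λx. f (n f x)) (λf.λx. f x)
       → λf.λx. f ((λf.λx. f x) f x)
       → λf.λx. f (f x) = 2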

There are no prerequisites for understanding the Lambda Calculus itself. If you are not a computer scientist and don't even know recursion, you can learn the basics of (untyped) Lambda Calculus informally in about 30 minutes here: http://palmstroem.blogspot.de/2012/05/lambda-calculus-for-absolute-dummies.html
This should give you a working intuition about what it does and how it works.
If you are familiar with basic mathematical notation and recursive definitions, you can go for a standard introduction. In particular, if you want to learn about the Lambda Calculus as a basis for Haskell, you should delve into the depths of the typed Lambda Calculus: http://www.cse.chalmers.se/research/group/logic/TypesSS05/Extra/geuvers.pdf

Related

Can the shunting yard algorithm parse POSIX regular expressions?

At first glance, the shunting yard algorithm seems applicable to POSIX regular expression parsing, but since I don't have much experience (or theoretical background) in writing parsers, I'd like to ask SO before jumping in and writing something only to get stuck halfway.
Perhaps a more sophisticated version of the question is: What is a good formal statement of the class of problems the shunting yard algorithm can be applied to?
Clarification: This question is about whether you can parse POSIX re syntax into an abstract syntax tree using the basic principles of the shunting algorithm, not whether you can use regular expressions to implement the shunting algorithm. Sorry I wasn't clear enough stating that to begin with!
I'm fairly sure it can. If you look at Henry Spencer's regular expression package:
regexp.shar.Z
which was the basis for Perl's regular expressions, you will notice that he describes the program as being in "railroad normal form".
I reckon you'd have some problems, because different characters have different meanings in different contexts, e.g.
^[^a-z][asd-]
The ^ has two different meanings (start-of-input anchor outside brackets, negation inside them), and so does the - (a range operator inside a bracket expression, but a literal when it appears first or last). I think I'd choose a recursive descent parser.
I don't see why it wouldn't be suitable. Looking at some old code, though, it seems I used a completely different strategy for my last regexp parser: a walk-through from the start of the pattern, building the resulting automaton representation as I went, with some look-ahead and recursive calls to implement grouping of regular expressions.
I will say that the answer to your question is "no, you cannot implement the shunting yard algorithm using a regular expression." This is for the same reason you cannot parse arbitrary HTML using regular expressions. Which boils down to this:
Regular expressions do not have a stack. Because the shunting yard algorithm relies on a stack (to push and pop operands as you convert from infix to RPN), then regular expressions do not have the computational "power" to perform this task.
This glosses over many details, but a "regular expression" is one way to define a regular language. When you "use" a regular expression, you are asking the computer: "Look at a body of text and tell me whether any of the strings in it are in my language, the language I defined using a regular expression." For more on regular languages, I'll point to this most excellent answer, which you and everyone reading this should upvote.
So now you need some mathematical concept that augments "regular languages" to create more powerful languages. If you were to characterize the shunting yard algorithm as a realization of a model of computational power, then you might say that it is best described by a context-free grammar (hey, what do you know, that link uses an expression parse tree as an example), that is, by a push-down automaton. Something with a stack.
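For instance, the classic infix-expression grammar is context-free but not regular; the recursion through parentheses in the Factor rule is exactly what the stack pays for:
Expr   → Expr '+' Term | Term
Term   → Term '*' Factor | Factor
Factor → '(' Expr ')' | number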
If you are less than familiar with automata theory and complexity classes, then those Wikipedia articles probably won't help much without working through them from the ground up.
The point being: you may be able to use regexes to help write a shunting yard parser, but regexes are poorly suited to operations with arbitrary nesting depth, which this problem has. So I would not spend too much time going down the regex avenue for this problem.
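To make the stack dependence concrete, here is a minimal shunting-yard sketch in C. It is a toy (single-digit operands, four operators, no error handling), but the push/pop structure is the part no regular expression can simulate:

#include <stdio.h>
#include <ctype.h>

/* Toy shunting yard: converts infix with single-digit operands,
 * + - * / and parentheses into RPN. No error handling. */
static int prec(char op) {
    return (op == '+' || op == '-') ? 1 : 2;
}

void to_rpn(const char *infix) {
    char stack[64];
    int top = -1;
    for (const char *p = infix; *p; p++) {
        char c = *p;
        if (isdigit((unsigned char)c)) {
            putchar(c); putchar(' ');            /* operands go straight out */
        } else if (c == '(') {
            stack[++top] = c;
        } else if (c == ')') {
            while (top >= 0 && stack[top] != '(') {
                putchar(stack[top--]); putchar(' ');
            }
            if (top >= 0) top--;                 /* discard the '(' */
        } else if (c == '+' || c == '-' || c == '*' || c == '/') {
            while (top >= 0 && stack[top] != '(' &&
                   prec(stack[top]) >= prec(c)) {
                putchar(stack[top--]); putchar(' ');
            }
            stack[++top] = c;
        }
    }
    while (top >= 0) { putchar(stack[top--]); putchar(' '); }
    putchar('\n');
}

int main(void) {
    to_rpn("3*(4+5)-2");    /* prints: 3 4 5 + * 2 - */
    return 0;
}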

Writing expressions: Infix, Postfix and Prefix

My task is to write an app (unfortunately in C) which reads an expression in infix notation (with variables, unary and binary operators), stores it in memory, and then evaluates it. Checks for correctness should also be performed.
For example:
3*(A+B)-(-2-78)*2+(0*A)
After all the variable values have been supplied, the program should evaluate the expression.
The question is:
What is the best way to do this (with optimization and validation)?
Which notation should I choose as the basis of the tree?
Should I represent the expression as a tree? If so, I can easily optimize it (e.g. fold away nodes that evaluate to 0, and the like).
Cheers,
The link suggested in the comment by Greg Hewgill above contains all the info you'll need. If you insist on writing your own, a recursive descent parser is probably the simplest way to do it by hand.
Otherwise you could use a tool like Bison (since you're working in C). This tutorial is the best I've seen for working with Flex and Bison (or Lex/Yacc).
You can also search for "expression evaluator" on Codeproject - they have a lot of articles on the topic.
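If you do write it by hand, a recursive descent evaluator is surprisingly compact. A minimal sketch (integers only; it assumes well-formed input with no whitespace, and omits the variables and validation your assignment needs):

#include <stdio.h>
#include <stdlib.h>

/* Grammar: expr   := term (('+'|'-') term)*
 *          term   := factor (('*'|'/') factor)*
 *          factor := '-' factor | '(' expr ')' | integer   */
static const char *p;               /* cursor into the input string */

static int expr(void);

static int factor(void) {
    if (*p == '-') { p++; return -factor(); }   /* unary minus */
    if (*p == '(') {
        p++;                                    /* consume '(' */
        int v = expr();
        p++;                                    /* consume ')' */
        return v;
    }
    return (int)strtol(p, (char **)&p, 10);     /* integer literal */
}

static int term(void) {
    int v = factor();
    while (*p == '*' || *p == '/') {
        char op = *p++;
        int r = factor();
        v = (op == '*') ? v * r : v / r;
    }
    return v;
}

static int expr(void) {
    int v = term();
    while (*p == '+' || *p == '-') {
        char op = *p++;
        int r = term();
        v = (op == '+') ? v + r : v - r;
    }
    return v;
}

int main(void) {
    p = "3*(4+5)-(-2-78)*2";
    printf("%d\n", expr());         /* prints 187 */
    return 0;
}

Building an actual syntax tree instead of evaluating on the fly has the same structure: each function returns a node rather than an int.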
I came across the M4 program's expression evaluator some time ago. You can study its code to see how it works. I think this link on Google Codesearch is the version I saw.
Your question hints at requirements being put on your solution:
unfortunately in C
so some suggestions here might not be permissible. Nevertheless, I would suggest that this is quite a complicated problem to solve, and that you would be much better off trying to find a suitable existing library which you could link into your C code to do this for you. This would likely reduce the time and effort required to get the code working, and reduce the ongoing maintenance effort. Of course, you'd have to think about licensing, but I'd be surprised if there wasn't a good parsing/evaluation library "out there" which could do a good job of this.

Can a language be Turing-complete without any support for arrays?

If a language has control structures and variables, but no support for arrays, lists, memory access and allocation, etc, can it be Turing-complete?
Maybe, if there were no limit to the number of variables you can create, you could simulate arrays by creating variables like array_1, array_2, ..., array_6000, looping through them manually, and somehow building complex data structures and recursion?
Edit: Even if you cannot access variables by name manipulation (array_10+i is not allowed)?
Certainly. Have a look at Lambda Calculus, which is one of the most minimal Turing Complete languages I've ever seen. Basically, all you have are lambdas (function literals); no assignment, no declaration, no data structures. It's all very very slimmed-down.
You can, however, simulate a linear data structure like a List by chaining functions together. It gets pretty verbose, but it's certainly possible and it's much nicer than having a large series of sequentially named variables.
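For instance, the classic Church encoding builds a cons cell out of nothing but lambdas, and nesting the cells gives you a list:
PAIR = λx.λy.λs. s x y
FST  = λp. p (λx.λy. x)
SND  = λp. p (λx.λy. y)
A three-element list is then PAIR a (PAIR b (PAIR c NIL)) for some agreed-upon NIL, and traversal is just repeated application of FST and SND.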
Generally speaking, whether or not a language is Turing Complete has nothing to do with whether it has arrays. Functional languages like SML and Haskell get along fine without arrays as a core construct, just like Lambda Calculus, and they are actually useful languages! Saying a language is "Turing Complete" is merely another way of saying that there is no Turing-computable function which cannot be expressed in it. This is a surprisingly loose qualification, admitting many languages which would be completely impractical (like Lambda Calculus).
There are plenty of Turing-complete languages that don't even have the notion of a "variable"! Memory access and allocation are implementation details, so they're completely irrelevant. You have to realize that Turing machines and Turing completeness are very theoretical concepts, useful for proving things, but completely divorced from the reality of actual hardware.
Paul Graham has written a long, but very, very interesting essay on the history of computer languages where he describes the two very different main traditions of computer languages:
Lisp, Scheme, etc. - derived from theoretical considerations, very simple, yet conceptually powerful languages, but for the longest time impractical because of their complete disregard for what's easy and efficient to implement
Assembler, FORTRAN, C and pretty much all "mainstream" languages - derived more or less directly from what the hardware could do, easy to implement, efficient, but for the longest time inferior to the (older!) Lisp family in terms of expressiveness.
It sounds like you know only the second tradition, but Turing completeness is a concept that originates from the same principles as the first tradition and makes little sense if you don't know those principles.

Theory of computation [closed]

What is the use and importance of studying theory of computation?
I had course on the same subject during graduation, but I did not study it seriously.
I also found the following link where some video lectures are available:
Theory of Computation
Shai Simonson's classes are really very good. I have listened to them. As he says in the initial lecture, 'Theory of Computation' is a study of abstract concepts. But these abstract concepts are really very important to a better understanding of the field of computing, as most of the concepts we deal with have a lot of abstract and logical underpinnings.
As John Saunders said, you can become a programmer, even a good one, if you know the programming language well. But knowledge of what is going on underneath will always make you an enlightened one. So go ahead and learn it again. (NB: I understand why you didn't study it seriously in college. Most college teachers aren't that good at explaining this topic; I too had a lousy one. But I assure you the teacher here is the best you can get.)
I think every computer science student should know some computation theory, even if they won't do any research.
Some concepts are simply universal, and you will encounter them again and again in other courses. For example finite state machines: you need to know them when you are learning string matching algorithms and compilers. Another example: in computation theory you will learn reduction algorithms (transforming one model into another); these things teach you how to think abstractly and algorithmically.
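As a toy illustration (hypothetical, not from any particular course), here is a hand-rolled DFA in C that accepts strings over {a, b} ending in "ab"; it is exactly the kind of machine a lexer generator or a string-matching algorithm builds for you:

#include <stdio.h>

/* DFA with three states: 0 = start, 1 = just saw 'a', 2 = just saw "ab". */
int ends_in_ab(const char *s) {
    int state = 0;
    for (; *s; s++) {
        if (*s == 'a')      state = 1;
        else if (*s == 'b') state = (state == 1) ? 2 : 0;
        else                state = 0;
    }
    return state == 2;      /* accept iff we end in state 2 */
}

int main(void) {
    printf("%d %d\n", ends_in_ab("aab"), ends_in_ab("aba"));  /* 1 0 */
    return 0;
}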
The greatest of all human faculties is the power of abstraction. That is what separates us from the animals. The more we exercise this power, the more successful we are in solving problems.
Playing chess may seem a futile pastime to some and never of practical use to any, but it goes a long way to give the player the ability to think ahead every time an important decision is to be made.
Besides, it reveals the elegance and simplicity that is hidden beneath layers of ugly syntax and brain-dead code we sift through every day just to make a living.
In addition to the usefulness of various tools (regular expressions, context free grammars, state machines etc.) in your daily life as a programmer, a good theoretical computer science course will have taught you how to model certain problems in a way that you can tackle effectively.
Solutions that seem clever to people without training in this discipline will seem natural and "the right way" to people who have. I recommend that you pay close attention to what's going on in your course since it will give you a very powerful toolset that will help you as a programmer and as an abstract thinker.
The importance of computation theory will depend on what you do with your life. If you want to be a computer scientist, then it is an important basis for your future studies.
If you just want to be a programmer or software engineer, then you will probably never use the knowledge again.
Theory of computation is sort of a hinge point among computer science, linguistics, and mathematics. If you have intellectual curiosity, then expose yourself to the underlying theory. If you just want to dip lightly into making computers do certain things, you can probably skip it. Me? I loved it. But I also liked topology, so I may not be a typical developer in that respect.
It's really not without its practical aspects with regard to software engineering.
For example, you may be tempted to parse some programming language as input to your program with regular expressions.
Computer science theory proves why this is a bad idea (most programming language syntaxes are not regular), and that limitation can never be overcome, no matter how hard you try.
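The canonical illustration is nesting: checking balanced parentheses to arbitrary depth needs unbounded counting, which no finite automaton (and hence no true regular expression) can do, while a few lines of C with a counter handle it easily:

#include <stdio.h>

/* Balanced-parentheses check: the counter is the unbounded memory
 * that a regular expression lacks. */
int balanced(const char *s) {
    int depth = 0;
    for (; *s; s++) {
        if (*s == '(') depth++;
        if (*s == ')' && --depth < 0) return 0;   /* closed too early */
    }
    return depth == 0;
}

int main(void) {
    printf("%d %d\n", balanced("((a)(b))"), balanced("(()"));  /* 1 0 */
    return 0;
}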
Other examples include NP-complete problems, etc.
Basically, computer science theory can teach you many important things with regard to reasoning. But it also describes the fundamental limits of programming and algorithms.
"Know your limits"
Some practical examples:
Before spending a lot of time on a problem, you'll want to know:
Whether the problem can be solved at all.
Whether there is a "good" (polynomial-time) solution, as some problems may not have "good" solutions (or at least, not ones we currently know of ;))
(A bit less practical) whether one problem is "harder" than another, that is, whether it takes more time or space.
Every computer science engineer has to learn the theory of computation, as it plays a vital role. The theory of computation forms the basis for:
Writing efficient algorithms that run in computing devices.
Programming language research and their development.
Efficient compiler design and construction.
Theory of computation studies the basic primitives necessary to handle computable problems. What counts as "computable" is tacitly understood to be von Neumann machine-style processing, as distinct from, say, a Lisp machine. (The Church–Turing thesis says these are ultimately equal, but in practice they give rise to two very different models of computation.)
For example, here's probably a minimal set to implement basic Turing functionality in a von Neumann-style machine:
MOVE data
AND, OR, and NOT transforms
a COMPARE operator
a JUMPIFEQUAL instruction
To get universal Turing functionality, you'll probably also need memory addressing and a call stack.
As Man's mind gets more complex, the target for what counts as "computable" gets higher and higher. There probably is no upper bound.

Learn C first before learning Objective-C [closed]

Being an aspiring Apple developer, I want to get the opinions of the community if it is better to learn C first before moving into Objective-C and ultimately the Cocoa Framework?
My gut says learn C, which will give me a good foundation.
I would learn C first. I learned C (and did a lot in C) before moving to Obj-C. I have many colleagues who were never real C programmers; they started with Obj-C and learned only as much C as necessary.
Every now and then I see how they solve a problem entirely in Obj-C, sometimes resulting in very clumsy solutions. Usually I then replace some Obj-C code with pure C code (after all, you can mix them as much as you like; the body of an Obj-C method can be entirely pure C code).
Without any intention to insult any Obj-C programmer: there are solutions that are very elegant in Obj-C, solutions that just work (and look) a lot better thanks to objects (OOP can make complex programs much more pleasant than procedural programming; polymorphism, for example, is a brilliant feature), and I really like Obj-C (much more than C++! I hate the C++ syntax, and some of its language features are plain overkill and lead to bad development patterns, IMHO). However, when I do rewrite my colleagues' Obj-C code (and I really only do so if I think it is absolutely necessary), the resulting code is usually 50% smaller, needs only 25% of the memory it used before, and is about 400% faster at runtime.
What I'm trying to say here: every language has its pros and cons, C and Obj-C included. The really great feature of Obj-C (and the reason I like it even more than Java) is that you can jump to plain C at will and back again. Why is this such a great feature? Because just as Obj-C fixes many of the cons of pure C, pure C can fix some of the cons of Obj-C. Mix them together and you get a very powerful team.
If you only learn Obj-C and have no idea of C, or only know the very basics and have never seen how elegantly it can solve some common problems, you have actually learned only half of Obj-C. C is a fundamental part of Obj-C; the ability to use C at any time, everywhere, is a fundamental feature of it.
A typical example was some code we used that had to encode data in base64, but we could not use an external library for that (no OpenSSL lib). We used a base64 encoder written entirely with Cocoa classes. It worked okay, but when we made it encode 200 MB of binary data, it took an eternity and the memory overhead was unacceptable. I replaced it with a tiny, ultra-compact base64 encoder written entirely as one C function (I copied the function body into the method body; the method took NSData as input and returned NSString as output, but inside the function everything was C). The C encoder was so much more compact: it beat the pure Cocoa encoder by a factor of 8 in speed, with far less memory overhead. Encoding/decoding data, playing around with bits, and similar low-level tasks are simply the strong points of C.
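For flavor, the C core of such an encoder might look something like this (a hypothetical reconstruction, not the original code; in the real method the input bytes came from an NSData and the output buffer was wrapped in an NSString):

#include <stdio.h>
#include <string.h>

/* Compact base64 encoder. 'out' must hold 4 * ((len + 2) / 3) + 1 bytes. */
static void base64_encode(const unsigned char *in, size_t len, char *out) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    size_t i, o = 0;
    for (i = 0; i + 2 < len; i += 3) {           /* full 3-byte groups */
        unsigned v = in[i] << 16 | in[i+1] << 8 | in[i+2];
        out[o++] = tbl[v >> 18];
        out[o++] = tbl[(v >> 12) & 63];
        out[o++] = tbl[(v >> 6) & 63];
        out[o++] = tbl[v & 63];
    }
    if (i < len) {                               /* 1 or 2 leftover bytes */
        unsigned v = in[i] << 16 | (i + 1 < len ? in[i+1] << 8 : 0);
        out[o++] = tbl[v >> 18];
        out[o++] = tbl[(v >> 12) & 63];
        out[o++] = (i + 1 < len) ? tbl[(v >> 6) & 63] : '=';
        out[o++] = '=';
    }
    out[o] = '\0';
}

int main(void) {
    char buf[32];
    base64_encode((const unsigned char *)"Man", 3, buf);
    printf("%s\n", buf);                         /* prints TWFu */
    return 0;
}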
Another example was some UI code that drew a lot of graphs. To store the data needed to paint the graphs, we used NSArrays (actually NSMutableArrays, since the graph was animated). The result: very slow graph animation. We replaced the NSArrays with plain C arrays, the objects with structs (after all, graph coordinate data is nothing you need objects for), the enumerator access with simple for loops, and we started moving data between arrays with memcpy instead of copying it element by element. The result: a speedup by a factor of 4. The graph animated smoothly, even on older PPC systems.
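The pattern itself is mundane C (an illustrative sketch; the names are invented): one plain struct per data point and a single memcpy per frame instead of per-element object traffic:

#include <stdio.h>
#include <string.h>

/* A graph point as a plain struct: no objects, no boxing. */
typedef struct { float x, y; } Point;

int main(void) {
    Point frame_a[3] = {{0, 0}, {1, 2}, {2, 4}};
    Point frame_b[3];
    memcpy(frame_b, frame_a, sizeof frame_a);           /* one bulk move */
    printf("%.0f %.0f\n", frame_b[2].x, frame_b[2].y);  /* prints 2 4 */
    return 0;
}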
The weakness of C is that any sufficiently complex program gets ugly in the long run. Keeping C applications readable, extensible and manageable demands a lot of discipline from the programmer, and many projects fail because that discipline is missing. Obj-C makes it easy to structure your application using classes, inheritance, protocols and so on. That said, I would not use pure C functionality across the borders of a method unless necessary. I prefer to keep all code in an Objective-C app within the methods of objects; everything else defeats the purpose of an OO application. Within a method, however, I sometimes use pure C exclusively.
You can readily learn C and Objective-C at the same time; there's certainly no need to learn the minutiae of C (including pointer arithmetic and so on) before starting with Objective-C's additions to the language, and as a novice programmer, getting underway with Objective-C may help you start "thinking in objects" sooner.
In terms of available resources, Apple's documentation does typically assume familiarity with C, so starting with The Objective-C 2.0 Programming Language won't be of much benefit to you. I would invest in a copy of Programming in Objective-C by Stephen Kochan (depending on how quickly you want to get underway, you may consider waiting for the second edition):
Programming in Objective-C (Developer's Library)
Programming in Objective-C 2.0 (Developer's Library)
It assumes no prior experience, and teaches you Objective-C and as much C as you need.
If you're feeling a little ambitious, you might start with Scott Stevenson's "Learn C" Tutorial, but it does have some prerequisites ("You should already know at least one scripting or programming language, including functions, variables and loops. You'll also need to type commands into the Mac OS X Terminal.").
(Just for the record and for context: I learned both at the same time back in 1991 -- it didn't seem to do me any harm. I did, though, have a background in BASIC, Pascal, Logo, and LISP.)
I thought a lot about this issue before writing my book on Objective-C. First, I really believe that learning the C language before learning Objective-C is the wrong path. C is a procedural language containing many features that are not necessary for programming in Objective-C, especially at the novice level. In fact, resorting to some of these features goes against the grain of adhering to a good object-oriented programming methodology. It's also not a good idea to teach all the details of a procedural language (that is, attacking a problem's solution with functions and structured programming techniques) before learning an object-oriented one. This can start the programmer off in the wrong direction, potentially fostering the wrong orientation and mindset for developing a good object-oriented programming discipline. Just because Objective-C is an extension to the C language doesn't mean you have to learn C first!
I think that teaching Objective-C and the underlying C language as a single integrated language is the right approach. There's no reason to learn that a "for" statement comes from the C language rather than from its superset, the Objective-C language. Further, why learn in detail about things like C arrays and strings (and manipulating them) before learning about array objects (NSArray) and string objects (NSString), for example? Many C texts devote a lot of time to structures, pointers to structures, and iterating through arrays with pointers. But you can start writing Objective-C programs without knowing about any of these C language features, and for a novice programmer that's a big deal. Not only does it shorten the learning curve, it also reduces the amount of material that has to be learned (and some of it selectively filtered out) before writing Objective-C programs.
I do agree that you will eventually want to learn most, if not all, of the underlying C features, but many can be deferred until defining classes and methods, working with objects and message expressions, and the concepts of inheritance and polymorphism are well understood.
I'd dive right in with Objective-C. If you've already got a few languages under your belt, it's not the syntax that's the learning curve, it's Cocoa.
I think that, for the most part, learning C is a good idea no matter what arena you're going into, at least to get the hang of the inner workings of software development before using prepackaged goods; that way, if something goes wrong, you have a better chance of understanding what happened. There's plenty of discussion about this on SO, and it's a rather subjective question, but in general you will inherently be using C within your Objective-C code, so I guess it's really up to you. I'm a ground-up kind of person, but sometimes that can get in the way, and I know several smart people who worked their way from the top down. I think the important part is that you eventually come to understand the inner workings, as that will set your capabilities apart from those who don't.
It's a good idea to learn C before learning Objective-C, which is a strict superset of C. This means that Objective-C can support all normal C code, so the code common to C programs is bound to show up even in Objective-C code.
In addition to looking at things purely from a language point of view, you will find that Mac OS X is a complete Unix operating system. All the system level libraries are written in C.
It is probably possible to learn both at the same time, but I think you will appreciate and understand Objective-C more if you have a solid working knowledge of C first.
I'd learn Objective-C and learn as much C as you need as you go along.
The areas of C that you won't depend on much:
Pointer arithmetic and arrays. I haven't used C arrays at all.
C strings. Objective-C's strings do the job nicer and safer.
Manual memory management if you use GC in Obj-C 2.1. I highly recommend this route for development speed and performance reasons.
As you learn Objective-C and Cocoa, you cannot avoid learning bits of C. For example, rectangles are commonly represented by CGRect, a C struct.
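Stripped of the real typedef names, the shape of CGRect is just nested C structs (a simplified sketch, not the actual CoreGraphics declarations):

typedef struct { double x, y; } PointSketch;                         /* cf. CGPoint */
typedef struct { double width, height; } SizeSketch;                 /* cf. CGSize  */
typedef struct { PointSketch origin; SizeSketch size; } RectSketch;  /* cf. CGRect  */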
If you have time, by all means learn C. As others have said here, Kochan's book (second and first editions) is excellent as a book to dip into.
There are a lot of things you can't do purely in Objective-C, so learning some basic C skills will be pretty critical. You'll at least need to understand variable declarations and the basic C library functions, or you'll be frustrated.
Honestly, so many languages are based on the C syntax that it's a good thing to be familiar with. I'd take a week or two to familiarize yourself with C regardless.
That said, I did just teach myself Objective-C, and I have to be honest: I didn't find my C experience to be as useful as I would have thought. Objective-C was definitely eye-opening for me.
You can jump directly into Objective-C, with the following benefits:
You'll learn "some" C in the way.
You'll learn the C parts that are relevant for you .
At least for me is easier to learn a new language when I'm interested in some specific app or sample, and I fail when I have to learn other thing that is not exactly what I'm interested on.
You can always refine your C knowledge later if you get interested in lower level programming.
Whether it's better, I don't know, all the more so as I am not familiar with Objective-C.
But the basics of C aren't so hard to learn; it isn't a very complex language (in terms of syntax, not in terms of mastering it!), so go for it, it won't be wasted time.
Personally, I think it is always a good idea to learn C; it gives a good insight into how the computer works. After all, most languages and systems are still written in C. Then move on! :-)
PS: By "move on", I didn't mean "drop it", just "learn more, learn different". Once you know C, you might never drop it: Java uses JNI to call C routines for low-level stuff; Python, Lua, etc. are often extended with C code (the Lua reference manual even assumes some C knowledge for functions that are just thin wrappers over the underlying C functions); and so on.
Yes, learning the C language before other, higher-level languages will help you pick those languages up quickly.
According to Wikipedia, Objective-C is a strict super-set of C. This being the case, I would suggest learning C first. Then when you learn Objective-C it will be clear what parts are added as part of Objective-C.
C gives you very little abstraction over assembly. Some C compilers will even let you write inline assembly. This can be very useful for thinking about how the computer works, which is important to know.
That being said, if you're really interested in Objective-C, don't let yourself get stuck writing something in C just because it's "good for you". You don't need to frustrate yourself while you're trying to learn a new skill set. It is important that you have fun with what you're doing.
Do you want to be a hard-core developer? Then learn C first.
The books you need to completely master C are some of the best writings in technology. Here's what you need:
The C Programming Language
The Standard C Library
Objective C is sufficiently different from C as to not merit learning C first.
From a syntax / language-family perspective, one is almost better off studying Smalltalk (on which Objective-C is based).
From a practical perspective, focus your efforts on learning one language at a time.
Later, if you wish to learn another language, C++, Java and Python are (1) easy to learn as a group, (2) popular and thus marketable, and (3) powerful.
You should have a basic knowledge of C before starting Objective-C, but there's no need to master every detail of C.
I've published my notes after reading "Programming in Objective-C" in case it helps someone else.
Depending on how many languages you already know, it may be a better idea to just start learning Objective-C. The foundations of most languages are basically the same; it's the syntax that differs. Learning C first isn't really going to make much of a difference when it comes to learning Objective-C.
I learned Objective-C straight away, and it has worked fine for about a year now. I just had some difficulty reading C code when I downloaded projects to see how they work, but now I really feel the need to learn C. You can try learning Obj-C without C, but sooner or later you will need C.
IMHO one should first learn at least some C and especially about pointers. That’s even more important if one comes from a language that doesn’t have pointers. A lot of people ask about code like
NSString *string = [[NSString alloc] init]; // 'string' points to a freshly allocated, empty string object
string = @"something"; // re-points 'string' at a literal; the object allocated above is leaked
since they don’t know about the distinction between a pointer and the object it points to.
Of course one doesn’t have to learn all of C before one can start with Objective-C, but some fundamental things are absolutely necessary.
Heck no, go straight to Objective-C!
I moved from ActionScript 3 to Objective C, and I already have an intern at a company!
Do what you want.
If you learn some other language first, you will always be confused about the right syntax. I don't know the reason, but Objective-C uses a weird (uncommon) syntax for calling object methods: it calls it "sending messages". Yes, that is true according to the pure object-oriented concept, but most object-oriented languages call this "calling a method" and use a more traditional syntax for it. Memory management is also very different: Objective-C has no true garbage collection and is based on old-school reference counting. So you will have difficulty accepting it if you switch from another language.
I am writing a book, an Objective-C quick migration guide for C/C++ programmers, hoping to help people pick up all the differences more quickly.

Resources