Is two-finger non-homerow touch-typing for programming acceptable? [closed]

I'm currently typing about 90 wpm (from http://speedtest.10-fast-fingers.com/ 90 correct 0 missed) using two fingers and the occasional ring or index. This probably grew from learning to type at an early age, before home-row was presented to me.
Is this acceptable? Do people religiously endorse home-row even with low-mistake poking without looking at the keyboard?

As long as your typing is fast and accurate, no one has a right to judge...

90 wpm is a good typing speed. As long as you're being as efficient as possible, go for it!
The only reason you may want to learn home-row typing is so that you can know how fast you are using that method. If you're doing 90 by poking, you'd probably be a speed demon with home-row!

90 WPM is a good typing speed. In fact, people can easily learn and improve their typing speed with the help of the many touch-typing programs available; they are easy to find and easy to learn with as well.

If you are typing 90 wpm with two fingers, then by all means don't stop, in my opinion. I learned via the home-row style and only average around 30 - 50 wpm. :)
I actually got a D in my typing class (still typewriter) and I remember telling my teacher, I'm never going to use this....
Ah, famous last words of a youngen... :)

There are people who are religious about it. But if you can type 90 wpm with two fingers, who cares what other people think? Your two fingers are faster than my 10.
Here's someone who is religious about it. Makes for amusing reading:
http://steve-yegge.blogspot.com/2008/09/programmings-dirtiest-little-secret.html
I second Robert Cartaino's suggestion about putting a video on youtube. I want to see this! No video editing tricks allowed.

Speed and accuracy is more important than how many fingers you type with. Perhaps more important though is how much concentration your typing method requires. This is an area where touch typing has an advantage.
Programmers that don't need to stop and look at their keyboard have a higher probability of putting their ideas into code quickly and efficiently. It is argued that non-touch typists may be more prone to taking shortcuts that sacrifice code quality.
Jeff Atwood has a nice blog post on this subject which contains a reference to a much longer rant on Stevey's Blog.

My impression is that people who know how to touch type tend to write more documentation in their code - just because it's easy. That's why I would check the typing speed of programmers when hiring, and 10 finger typists would have some bonus.
By the way: learning home-row typing is probably much easier for you than you think. Just understand the basic principle and remind yourself periodically how it should be done and how to place your hands on the keyboard, and you will be typing faster and more conveniently in a couple of weeks. There is no need to take a course or something.

Who in hell would care how you use a keyboard?
Our code is our art, somebody caring how you typed it is like someone refusing to buy art because of how an artist holds the pencil.
So, this question could not be more irrelevant. Voting to close.

In middle school I typed blazingly fast using the first two fingers of each hand and both thumbs. I was made to take a touch-typing class (on horrible IBM Selectrics), forgot the system I'd invented, and have never been able to recover the speed I had when I was eleven. If the shoe fits, then by all means wear it.

My concern wouldn't be about whether your typing style is acceptable, but whether it's safe. Although I don't have any evidence that this is the case, it's possible that your differing typing style might make you more prone to repetitive stress injuries such as carpal tunnel. If you start having pain problems, consider trying to learn to type more traditionally and it might help.
I probably only make around 60 WPM myself (depends what you consider a "word" -- insert two-byte joke here) so I don't think your speed is anything to worry about.

Related

Computer program more intelligent than the programmer? Is it possible? [closed]

Today I asked a psychologist how you can design an IQ test to assess someone more intelligent than the test's designer. He answered: the same way you can design a chess program that its designer cannot beat!
As a beginner, I'm not sure whether this question can be answered here, but what interests me is whether we can write a program that evolves and learns by itself, such that a human (even its programmer) cannot predict its behavior. I hope the answer is no; otherwise there may one day be viruses or worms with unpredictable behavior, controlling human society!
Artificial intelligence agents behave within some programmed space (a chess-playing agent is inside the chess-playing space).
Agents cannot leave the programmed space. A chess-playing agent is unlikely to take over the world any time soon. It is predictable in this sense.
The behaviour within this space is somewhat predictable: it is, after all, based on well-defined mathematical equations (usually complex enough that prediction is hard, but still possible in principle). There is usually some randomness involved, however, which obviously is not predictable.
Note that "intelligence" is not the same as predictability. Researchers have been trying to make AI truly intelligent for a long time, with (arguably) slow progress.
EDIT:
Note that some agents can have the entire world as their programmed space. That doesn't enforce many boundaries.
By 'programmed space' I don't mean what was programmed into the agent as much as what the agent is programmed to observe or do. If an agent can only see a chess board and only make chess moves, how will it ever become more than a chess-playing agent?
True evolution may allow agents to extend their programmed space, but I'll have to think about whether this is actually possible.
It is possible. Chess programs actually do beat their designers by wide margins. They beat the world champions at chess by smaller margins, but that is just a matter of time.
There is an example for a system that "learns how to learn even faster": evolution on earth has optimized itself. Genes and behavior are optimized to facilitate a high rate of marginal improvement. Reproduction almost never fails and genes have just the right amount of mutation due to natural defects (radiation, chemical processes, ...). The "tunables" have been set nicely.
I think your text in the question describes two situations:
The first paragraph covers an IQ test and a chess game. Both have a limited number of options. Even though there are a lot of possibilities, the number is finite, and many can be ruled out from the start because they are too poor to even be considered. That is why programs like these exist. Do notice, though, that there are still A LOT of possibilities, and that's why such programs aren't perfect yet.
The second paragraph covers a self-learning program or robot. In the real world there is an infinite number of possibilities, of things that can happen. You might try to code a program, but there is no way (in the near future) you can take account of all the things life has to offer.
I do have to comment on Dukeling's comment below. If you manage to code a program which can learn to react to pretty much everything life has to offer (including the negative parts), an AI like that will probably evaluate its own 'space' and will be able to look, and even step, outside of it.
Long story short: it will happen. What the result will be is unknown: either robots will be programmed perfectly, or they will be shut down, or the human race will go extinct. Every scenario is possible, because we will advance in technology at a pace you can't even imagine right now. Have doubts? Go tell someone from 50 years ago that a machine beats the best chess player in the world.

Is bit twiddling a good test for an embedded engineer? [closed]

I interview candidates for embedded software engineer positions (in our company we use mostly C, sometimes C++).
I usually give the candidate a bit-twiddling question. I do not mention that it can be solved with bit twiddling when that is not obvious. I also accept solutions that don't use bit operations, but then I guide the candidate toward bits (e.g. by saying: "What if you couldn't use the modulo operator?"). Example questions (possible bit-level answers are sketched right after this list) could be:
check if a number is divisible by 2 and not divisible by 4
round up a number to next power of 2
count set bits in a word
find the highest (most significant) bit that is set in a word
etc.
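For reference, here is one possible set of bit-level answers, sketched in C and assuming 32-bit unsigned words (a candidate might of course get there differently):

    #include <stdint.h>
    #include <stdio.h>

    /* divisible by 2 but not by 4: the two lowest bits must be "10" */
    int div2_not4(uint32_t x) { return (x & 3u) == 2u; }

    /* round up to the next power of 2 (an exact power of 2 maps to itself) */
    uint32_t next_pow2(uint32_t x) {
        x--;
        x |= x >> 1;  x |= x >> 2;  x |= x >> 4;
        x |= x >> 8;  x |= x >> 16;
        return x + 1;               /* note: 0 stays 0 */
    }

    /* count set bits: clear the lowest set bit until nothing is left */
    int popcount(uint32_t x) {
        int n = 0;
        for (; x; x &= x - 1) n++;
        return n;
    }

    /* highest set bit: shift right until the word is exhausted (-1 for 0) */
    int highest_bit(uint32_t x) {
        int pos = -1;
        for (; x; x >>= 1) pos++;
        return pos;
    }

    int main(void) {
        printf("%d %u %d %d\n", div2_not4(6), next_pow2(5),
               popcount(0xF0u), highest_bit(0x10u));   /* prints: 1 8 4 4 */
        return 0;
    }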
In my opinion such questions should be easily solved within one minute by any decent software engineer (or even a fresh graduate), regardless of which software domain they work in. However, my boss recently complained that these questions are too low-level for someone who is going to work on, e.g., a GUI. It has come to the point where my questioning is useless: my boss hired a candidate who utterly failed on this topic but claimed to be experienced in Qt (which I could not check, as I have never used Qt, and for some reason I was the only one available to interview him at the time).
So my question is: is bit twiddling a good question for any (embedded) software engineer, or is my boss right that I should give up asking it of all candidates?
I think the thing to be careful about is that you aren't asking for knowledge where it's basically random whether the interviewee has it to hand or not. For your examples, I cannot immediately remember what the bit-twiddling hacks are for the most-significant bit questions (finding it, and rounding up to the next power of 2). I know that there are hacks that I could look up, and I know about __builtin_clz on GCC which of course isn't portable.
So, have I failed the interview? And if so, are you sure that I'm not fit to program Qt in your company? You're implementing a filter, and you just have to think about the false positive rate (how many people will know the bit-twiddles but be poor employees) and the false negative rate (how many good employees have forgotten the bit-twiddles).
Of course all interviews have a false negative rate. The cost to your company hiring a bad employee is pretty high, so you have to be certain. For that reason, I think that if this employee turns out to be good, that's probably more by luck than judgement on the part of your company -- you shouldn't hire someone to be a Qt programmer without testing their ability with Qt or something similar. What's the cost of setting up a second interview with each of the three best candidates, taken by one of your Qt people, compared with the value of hiring the best one? What's the cost of your time to interview the candidates, given that this interview has no influence on the hiring decision?
Just remember that different kinds of programmers have different knowledge at their immediate command, and make sure that your questions are strongly correlated with the kind of programmer you want. On this particular subject, what it proves is that the interviewee has spent time bit-twiddling, possibly in the fairly recent past. On the plus side, that's time they spent programming. On the minus side, that's time they didn't spend writing and learning Qt.
I would be concerned if someone didn't know how to check the lsb, and didn't know that for positive values and for 2's complement negative values, the lsb is 1 for odd numbers and 0 for even numbers. That's because I expect programmers to know what a binary representation is. I would also be concerned if someone thought that x & 1 was likely to be better in some way than x % 2, because it means they use a terrible compiler ;-)
I wouldn't be too concerned if someone couldn't remember the bit-twiddle for popcount. Slightly more if they couldn't figure out the code for it after you'd given them a heavy hint of one of the simpler ways to do it: "How could I compute the popcount of a 2 bit integer using bit-twiddling? OK, now write a line to do that in parallel for each of the top and bottom 2 bits of a 4 bit integer. OK, now write a 32-bit popcount".
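For what it's worth, the parallel trick hinted at there works out to something like this for 32-bit words (a sketch of the classic approach, not necessarily what I'd demand in an interview):

    #include <stdint.h>
    #include <stdio.h>

    /* Popcount by summing adjacent bit-pairs, then nibbles, then bytes. */
    int popcount32(uint32_t x) {
        x = x - ((x >> 1) & 0x55555555u);                 /* 2-bit sums */
        x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u); /* 4-bit sums */
        x = (x + (x >> 4)) & 0x0F0F0F0Fu;                 /* 8-bit sums */
        return (int)((x * 0x01010101u) >> 24);            /* add bytes  */
    }

    int main(void) {
        printf("%d\n", popcount32(0xF0F0F0F0u));          /* prints 16  */
        return 0;
    }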
If you are programming in C, I would say yes. Bit twiddling is not something you do everyday, but something which shows that you understand how the computer handles your data.
If you program in a higher level language, I suppose it is less important.
For an embedded engineer, or a software engineer who will have to cope with talking to specific hardware, these questions are good. There's a likely chance they will then have to write code that fiddles with bits. I'm not sure this can be solved within exactly one minute though, especially not if the engineer in question wants to do it in a generic way.
My opinion is that every electronics and/or software engineer must know these things. These are the basics of digital computing and computer programming.
I think this depends on what kinds of embedded systems they'll be working on, and what they'll be doing with them; "embedded" covers a very wide range of hardware. If they'll need to write low-level code or work with small microcontrollers, things like bit twiddling matter more than for higher level code on something bigger like a PC-compatible SBC.
However, even when using a more powerful system and not writing low-level code, it's still important to be somewhat familiar with bitwise operations -- at least enough to handle bit flags and the like (at a minimum).

Theory of computation [closed]

What is the use and importance of studying theory of computation?
I had course on the same subject during graduation, but I did not study it seriously.
I also found the following link where some video lectures are available:
Theory of Computation
Shai Simonson's classes are really very good. I have listened to them. As he says in the initial lecture, 'Theory of Computation' is a study of abstract concepts. But these abstract concepts are really very important to a better understanding of the field of computing, as most of the concepts we deal with have a lot of abstract and logical underpinnings.
As John Saunders said, you can become a programmer, even a good one, if you know the programming language well. But knowing what is going on underneath will always make you more enlightened. So go ahead and learn it again. (NB: I understand why you didn't study it seriously in college. Most of the teachers in our colleges aren't that good at explaining this topic (I too had a lousy teacher), but I assure you the teacher here is the best you can get.)
I think every computer science student should know some computation theory, even if you won't do any research.
Some concepts are simply universal and you will encounter them again and again in other courses. For example, finite state machines: you need them when you are learning string-matching algorithms and compilers. Another example: in computation theory you will learn some reduction algorithms (transforming one model into another), and these things teach you how to think abstractly and algorithmically.
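For example, here is a minimal hand-built DFA (a sketch) that scans text for the fixed pattern "abc" in a single pass, one state transition per character:

    #include <stdio.h>

    /* state = how much of "abc" has been matched so far (0, 1 or 2) */
    static int contains_abc(const char *text) {
        int state = 0;
        for (; *text; text++) {
            char c = *text;
            switch (state) {
            case 0: state = (c == 'a') ? 1 : 0;                  break;
            case 1: state = (c == 'b') ? 2 : (c == 'a') ? 1 : 0; break;
            case 2: state = (c == 'c') ? 3 : (c == 'a') ? 1 : 0; break;
            }
            if (state == 3) return 1;   /* accepting state reached */
        }
        return 0;
    }

    int main(void) {
        printf("%d %d\n", contains_abc("xxabcxx"), contains_abc("ababab")); /* 1 0 */
        return 0;
    }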
The greatest of all human faculties is the power of abstraction. That is what separates us from the animals. The more we exercise this power, the more successful we are in solving problems.
Playing chess may seem a futile pastime to some and never of practical use to any, but it goes a long way to give the player the ability to think ahead every time an important decision is to be made.
Besides, it reveals the elegance and simplicity that is hidden beneath layers of ugly syntax and brain-dead code we sift through every day just to make a living.
In addition to the usefulness of various tools (regular expressions, context free grammars, state machines etc.) in your daily life as a programmer, a good theoretical computer science course will have taught you how to model certain problems in a way that you can tackle effectively.
Solutions that seem clever to people without training in this discipline will seem natural and "the right way" to people who have. I recommend that you pay close attention to what's going on in your course since it will give you a very powerful toolset that will help you as a programmer and as an abstract thinker.
The importance of computation theory will depend on what you do with your life. If you want to be a computer scientist, then it is an important basis for your future studies.
If you just want to be a programmer or software engineer, then you will probably never use the knowledge again.
Theory of computation is sort of a hinge point among computer science, linguistics, and mathematics. If you have intellectual curiosity, then expose yourself to the underlying theory. If you just want to dip lightly into making computers do certain things, you can probably skip it. Me? I loved it. But I also liked topology, so I may not be a typical developer in that respect.
It’s really not without its practical aspects in regards to software engineering.
For example, you may be tempted to parse some programming language as input to your program with regular expressions.
Computer science theory proves why this is a bad idea (most programming language syntaxes are not regular), and it can never be overcome no matter how much you'd like to try.
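As a small illustration: matching balanced parentheses needs an unbounded counter (a one-symbol stack), which is exactly what a regular expression cannot keep, while a few lines of ordinary code can:

    #include <stdbool.h>
    #include <stdio.h>

    /* Nesting depth is unbounded, so no finite-state matcher can track it. */
    static bool balanced(const char *s) {
        int depth = 0;
        for (; *s; s++) {
            if (*s == '(') depth++;
            else if (*s == ')' && --depth < 0) return false;
        }
        return depth == 0;
    }

    int main(void) {
        printf("%d %d\n", balanced("(()())"), balanced("(()"));  /* 1 0 */
        return 0;
    }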
Other examples include NP-complete problems, etc.
Basically, computer science theory can teach you many important things with regards to reasoning. But it also describes the fundamental limits to programming and algorithms.
"Know your limits"
Some practical examples:
Before spending a lot of time on a problem, you'll want to know:
If the problem can't be solved.
If there is a "good" (polynomial) solution, as some problems may not have good" solutions (or at least, not ones we currently know of ;))
(A bit less practical) you'll want to know if a problem is "harder" than another, that is, takes more time/space.
Every computer science engineer has to learn the theory of computation, as it plays a vital role. The theory of computation forms the basis for:
Writing efficient algorithms that run in computing devices.
Programming language research and their development.
Efficient compiler design and construction.
Theory of computation studies the basic primitives necessary to handle computable problems. What counts as "computable" is tacitly understood to be von Neumann machine-style processing, as distinct from a Lisp machine. (The Church–Turing thesis says these are ultimately equal, but in practice they create two very different models of computation.)
For example, here's probably a minimal set to implement basic Turing functionality in a von Neumann-style machine:
MOVE data
AND, OR, and NOT transforms
a COMPARE operator
JUMPIFEQUAL function
To get universal Turing functionality, you will probably also need memory addressing and a call stack.
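Purely as an illustration of that primitive set (the opcodes and operand layout here are invented, not a real instruction encoding), a dispatch loop for such a machine might look like:

    #include <stdint.h>

    enum { OP_MOVE, OP_AND, OP_OR, OP_NOT, OP_CMP, OP_JEQ, OP_HALT };

    /* Each instruction is 3 bytes: opcode, operand a, operand b. */
    static void run(const uint8_t *code, uint8_t *mem) {
        unsigned pc = 0;
        uint8_t flag = 0;
        for (;;) {
            uint8_t op = code[pc], a = code[pc + 1], b = code[pc + 2];
            pc += 3;
            switch (op) {
            case OP_MOVE: mem[a] = mem[b];           break;
            case OP_AND:  mem[a] &= mem[b];          break;
            case OP_OR:   mem[a] |= mem[b];          break;
            case OP_NOT:  mem[a] = ~mem[a];          break;
            case OP_CMP:  flag = (mem[a] == mem[b]); break;
            case OP_JEQ:  if (flag) pc = a;          break;   /* conditional jump */
            case OP_HALT: return;
            }
        }
    }

    int main(void) {
        uint8_t mem[4]  = { 0, 42, 0, 0 };
        uint8_t code[6] = { OP_MOVE, 0, 1, OP_HALT, 0, 0 };   /* mem[0] = mem[1] */
        run(code, mem);
        return mem[0] == 42 ? 0 : 1;
    }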
As Man's mind gets more complex, the target for what counts as "computable" gets higher and higher. There probably is no upper bound.

How long to learn C? [closed]

I'm a C# programmer and I'm sold on the benefits of learning C. I want to deepen my knowledge of the underlying OS and CPU, understand the pain of memory management that garbage collection encapsulates away and generally improve my high-level programs thanks to an appreciation of the low-level issues that the compiler is dealing with on my behalf.
My question is how long can I expect to spend learning the C language in order to gain these benefits?
Is a couple of weekends spent reading the K&R book from cover to cover sufficient, or do I need to schedule time to cut some code? Do I need to spend time delving into any libraries, or is an understanding of the first-order concepts in the language enough to improve my C# code?
To be clear, I don't intend to write any significant programs in C. My goal is more to learn from the language than to become an expert in the language.
C will take a week to learn, and a lifetime to master.
Reading a K&R book and not writing code is like reading a book on weapons and never actually shooting. Yes, you've read in a book, that it works this way, but you have never encountered the typical problems that arise while doing this. Without practice such "knowlegde" is worth very little.
Plan to spend 2-3 years slowly writing small programs for solving different tasks in C. This will count as real experince. C provides delayed gratification for your effort.
I'm not sure how long it takes to learn a language - it probably comes down to the individual. But I'm pretty confident you can't learn one without writing and debugging code in it.
Ten Years
If you can read K&R and understand it all, that's pretty good, as K&R covers pretty much all of the language.
However, reading it and understanding it all are very different. You should probably take a few passes through K&R and do all the associated exercises to ensure you really know it.
Even after reading through all of that, you will spend more months learning pointers the hard way. Expect lots of seg faults. On the plus side though, you'll get really good at reading hex!
There are a few caveats to the language that you'll discover as well. One that used to give me trouble is that, on a typical platform, all data pointers are the same size (4 bytes on 32-bit x86), regardless of what they point at: a char* is the same size as a void* or an int*. (The standard doesn't strictly guarantee this for every pointer type, but it holds on the platforms you're likely to encounter.)
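You can check this on your own machine in a few lines:

    #include <stdio.h>

    int main(void) {
        /* On a typical 32-bit x86 build these all print 4; on 64-bit, 8. */
        printf("%zu %zu %zu\n", sizeof(char *), sizeof(void *), sizeof(int *));
        return 0;
    }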
It will take a lot lot longer if you just sit around asking abstract questions and not actually diving in and doing it. Do you have a deadline or something? How long will it take me to learn the piano? Who cares, I just wanna make some noise. That's how kids learn so fast. They don't care about becoming an expert, or even good. They just like to play.
In any case, if you want to learn some interesting things, try some assembler as well. A lot of people really hate it, but that's just because they don't like spending countless hours not accomplishing much. I like it just fine.
You definitely need to write some code - I don't believe you can learn any language without doing that. K&R has lots of exercises you can practice on. It's difficult to know how long in terms of elapsed time it will take to get a good working knowledge - I used to teach pretty much the whole language in 4.5 days, but that is quite intensive. I'd suggest about a month, if you are doing an hour or so a day.
Edit: I must admit, I find it a bit depressing that so many people think C is so difficult. K&R is 272 pages long in my copy and covers basically everything you need to know, including the standard library. Is there a book on ANY other programming language that covers the whole shebang so concisely? I don't think so, and the reason is not that K&R is compressed in some way (Brian Kernighan is THE greatest technical writer, IMHO) but that the language is simple and easy to describe.
I read the K&R book cover to cover and would not say I have any great understanding of C. Some time doing the exercises in K&R would be hugely beneficial.
I'm sure C libraries would make you more productive writing programs, but if it is simply learning C you are interested in, then you can implement anything you need yourself. www.projecteuler.net is a good source of problems (although slightly mathematical in general) for you to get started on, if you fancy trying some coding outside of the K&R exercises.
In a couple of weekends, you will obtain mainly two results:
hello world
a lot of segmentation faults
C is not easy, particularly if you are not used to its hardcore concepts. You will have to invest weeks, even months, of tinkering with it to grasp its more obscure corners (though there are not too many of them).
40 days and 40 nights.
If you can't do the days and nights sequentially, then it will be 42 weekends.
But seriously, without putting any context on how fast you learn other topics, nobody can give you a real answer that is relevant to you. We can say how long it took us to learn it to a satisfying level, but that has zero correlation to how long it should take you to learn it.
If you said it took you 6 months to be good at C#, then maybe we can say it should take you 6 months * X (where X is still a guess, but a better guess than now).
We can all agree, however, that just reading the book is not enough. Of course you will have to write code. That is how we best learn anything - read it, write it, teach it. If you really want to learn something, teach it.
To understand the pain of memory management, just begin by writing sample programs with stacks, linked lists, binary trees, etc. You'll see what you're getting into.
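For instance, even a minimal linked-list exercise already forces you to pair every malloc with a free:

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int value; struct node *next; };

    /* Push a value onto the front of the list; the caller owns the memory. */
    static struct node *push(struct node *head, int value) {
        struct node *n = malloc(sizeof *n);
        if (!n) { perror("malloc"); exit(EXIT_FAILURE); }
        n->value = value;
        n->next  = head;
        return n;
    }

    static void free_list(struct node *head) {
        while (head) {
            struct node *next = head->next;
            free(head);             /* forget this and you have a leak */
            head = next;
        }
    }

    int main(void) {
        struct node *list = NULL;
        for (int i = 0; i < 5; i++)
            list = push(list, i);
        free_list(list);
        return 0;
    }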
In school I was taught C as the introductory language, and when pointers were introduced a whole slew of students dropped the class because, frankly, they're a hard concept to grasp.
As many of the other answers have stated... plan to not only read but practice. There's no doubt that you've learned a lot of C# just by making mistakes while coding and having 'aha!' moments.
IMO: 3 to 4 years to really understand the majority of concepts. A book will help you realize what the capabilities of the language are.

When is theoretical computer science useful?

In class, we learned about the halting problem, Turing machines, reductions, etc. A lot of classmates are saying these are all abstract and useless concepts, and there's no real point in knowing them (i.e., you can forget them once the course is over and not lose anything).
Why is theory useful? Do you ever use it in your day-to-day coding?
True story:
When I got my first programming job out of graduate school, the guys that owned the company that I worked for were pilots. A few weeks after I was hired, one of them asked me this question:
There are 106 airports in Arkansas. Could you write a program that would find the shortest route necessary to land at each one of them?
I seriously thought he was quizzing me on my knowledge of the Traveling Salesman Problem and NP-Completeness. But it turns out he wasn't. He didn't know anything about it. He really wanted a program that would find the shortest path. He was surprised when I explained that there were 106-factorial solutions and finding the best one was a well-known computationally intractable problem.
So that's one example.
When I graduated from college, I assumed that I was on par with everyone else: "I have a BS in CS, and so do a lot of other people, and we can all do essentially the same things." I eventually discovered that my assumption was false. I stood out, and my background had a lot to do with it--particularly my degree.
I knew that there was one "slight" difference, in that I had a "B.S." in CS because my college was one of the first (supposedly #2 in 1987) in the nation to receive accreditation for its CS degree program, and I graduated in the second class to have that accreditation. At the time, I did not think that it mattered much.
I had also noticed during high school and in college that I did particularly well at CS--much better than my peers and even better than many of my teachers. I was asked for help a lot, did some tutoring, was asked to help with a research project, and was allowed to do independent study when no one else was. I was happy to be able to help, but I did not think much about the difference.
After college (USAFA), I spent four years in the Air Force, two of which were applying my CS degree. There I noticed that very few of my coworkers had degrees or even training related to computers. The Air Force sent me to five months of certification training, where I again found a lack of degrees or training. But here I started to notice the difference--it became totally obvious that many of the people I encountered did not REALLY know what they were doing, and that included the people with training or degrees. Allow me please to illustrate.
In my Air Force certification training were a total of thirteen people (including me). As Air Force officers or the equivalent, we all had BS degrees. I was in the middle based on age and rank (I was an O-2 amongst six O-1s and six O-3s and above). At the end of this training, the Air Force rubber-stamped us all as equally competent to acquire, build, design, maintain, and operate ANY computer or communication system for ANY part of the Department of Defense.
However, of the thirteen of us, only six had any form of computer-related degree; the other seven had degrees ranging from aeronautics to chemistry/biology to psychology. Of the six of us with CS degrees, I learned that two had never written a program of any kind and had never used a computer more than casually (writing papers, playing games, etc.). I learned that another two of us had written exactly one program on a single computer during their CS degree program. Only one other person and myself had written more than one program or used more than one kind of computer--indeed, we found that we two had written many programs and used many kinds of computers.
Towards the end of our five-month training, our class was assigned a programming project and we were divided into four groups to separately undertake it. Our instructors divided up the class in order to spread the "programming talent" fairly, and they assigned roles of team lead, tech lead, and developer. Each group was given a week to implement (in Ada) a full-screen, text-based user interface (this was 1990) for a flight simulator on top of an instructor-provided flight-mechanics library. I was assigned as tech lead for my team of four.
My team lead (who did not have a computer degree) asked the other three of us to divide up the project into tasks and then assigned a third of them to each of us. I finished my third of the tasks by the middle of that first day, then spent the rest of the day helping my other two teammates, talking to my team lead (BSing ;^), and playing on my computer.
The next morning (day two), my team lead privately informed me that our other two teammates had made no progress (one could not actually write an "if" statement that would compile), and he asked me to take on their work. I finished the entire project by mid-afternoon, and my team spent the rest of the day flying the simulator.
The other guy with the comparable CS degree was also assigned as a tech lead for his team, and they finished by the end of day three. They also began flying their simulator. The other two teams had not finished, or even made significant progress, by the end of the week. We were not allowed to help other teams, so it was left at that.
Meanwhile, by the middle of day three, I had noticed that the flight simulator just seemed to behave "wrong". Since one of my classmates had a degree in aeronautics, I asked him about it. He was mystified, then confessed that he did not actually know what made a plane fly!?! I was dumbfounded! It turns out that his entire degree program was about safety and crash investigations--no real math or science behind flight. On the other hand, I had maybe a minor in aeronautics (remember USAFA?), but we had designed wings and performed real wind tunnel tests. Therefore, I quietly spent the rest of the week rewriting the instructor-provided flight-mechanics library until the simulator flew "right".
Since then, I have spent nearly two decades as a contractor and occasionally as an employee, always doing software development plus related activities (DBA, architect, etc.). I have continued to find more of the same, and eventually I gave up on my youthful assumption.
So, what exactly have I discovered? Not every one is equal, and that is okay--I am not a better person because I can program effectively, but I am more useful IF that is what you need from me. I learned that my background really mattered:
growing up in a family of electricians and electrical engineers,
building electronics kits,
reading LITERALLY every computer book in the school/public libraries because I did not have access to a real computer,
then moving to a new city where my high school did have computers,
then getting my own computer as a gift,
going to schools that had computers of many different sizes and kinds (PCs to mainframes),
getting an accredited degree from a VERY good engineering school,
having to write lots of programs in different languages on different kinds of computers,
having to write hard programs (like my own virtual machine with a custom assembly language, or a Huffman compression implementation, etc.),
having to troubleshoot for myself,
building my own computers from parts and installing ALL the software,
etc.
Ultimately, I learned that my abilities are built on a foundation of knowing how computers work from the electrical level on up--discrete electronic components, circuitry, subsystems, interfaces, protocols, bits, bytes, processors, devices, drivers, libraries, programs, systems, networks, on up to the massive enterprise-class conglomerates that I routinely work on now. So, when the damn thing misbehaves, I know exactly HOW and WHY. And I know what cannot be done as well as what can. And I know a lot about what has been done, what has been tried, and what is left relatively unexplored.
Most importantly, after I have learned all that, I have learned that I don't know a damned thing. In the face of all that there is potentially to know, my knowledge is miniscule.
And I am quite content with that. But I recommend that you try.
Sure, it's useful.
Imagine a developer working on a template engine. You know the sort of thing...
Blah blah blah ${MyTemplateString} blah blah blah.
It starts out simple, with a cheeky little regular expression to perform the replacements.
But gradually the templates get a little more fancy, and the developer includes features for templatizing lists and maps of strings. To accomplish that, he writes a simple little grammar and generates a parser.
Getting very crafty, the template engine might eventually include a syntax for conditional logic, to display different blocks of text depending on the values of the arguments.
Someone with a theoretical background in CS would recognize that the template language is slowly becoming Turing complete, and maybe the Interpreter pattern would be a good way to implement it.
Having built an interpreter for the templates, a computer scientist might notice that the majority of templating requests are duplicates, regenerating the same results over and over again. So a cache is developed, and all requests are routed through the cache before performing the expensive transformation.
Also, some templates are much more complex than others and take a lot longer to render. Maybe someone gets the idea to estimate the execution of each template before rendering it.
But wait!!! Someone on the team points out that, if the template language really is Turing complete, then the task of estimating the execution time of each rendering operation is an instance of the Halting Problem!! Yikes, don't do that!!!
The thing about theory, in practice, is that all practice is based on theory. Theoretically.
The things I use most:
computational complexity to write algorithms that scale gracefully
understanding of how memory allocation, paging, and CPU caching work so I can write efficient code
understanding of data structures
understanding of threading, locking, and associated problems
As to that stuff on Turing machines etc., I think it is important because it defines the constraints under which we all operate. That's important to appreciate.
it's the difference between learning algebra and being taught how to use a calculator
if you know algebra, you realize that the same problem may manifest in different forms, and you understand the rules for transforming the problem into a more concise form
if you only know how to use a calculator, you may waste a lot of time punching buttons on a problem that is either (a) already solved, (b) cannot be solved, or (c) is like some other problem (solved or unsolved) that you don't recognize because it's in a different form
pretend, for a moment, that computer science is physics... would the question seem silly?
A friend of mine is doing work on a language with some templates. I was asked in to do a little consulting. Part of our discussion was on the template feature, because if the templates were Turing complete, they would have to really consider VM-ish properties and how/if their compiler would support it.
My story is to this point: automata theory is still taught, because it still has relevance. The halting problem still exists and provides a limit to what you can do.
Now, do these things have relevance to a database jockey hammering out C# code? Probably not. But when you start moving to a more advanced level, you'll want to understand your roots & foundations.
Although I don't directly apply them in day-to-day work, I know that my education on formal computer science has affected my thinking process. I certainly avoid certain mistakes from the onset because I have the lessons learned from the formal approaches instilled in me.
It might seem useless while they're learning it, but I bet your classmates will eventually come across a problem where they'll use what they were taught, or at least the thinking patterns behind it...
Wax on... Wax off... Wax on... Wax off... What does that have to do with Karate, anyways?
At one job I was assigned the task of improving our electrical distribution model's network tracing algorithm as the one they were using was too slow. The 3-phase network was essentially three n-trees (since loops aren't allowed in electrical networks). The network nodes were in the database and some of the original team couldn't figure out how to build an in-memory model so the tracing was done by successive depth SELECTs on the database, filtering on each phase. So to trace a node ten nodes from the substation would require at least 10 database queries (the original team members were database whizzes, but lacked a decent background in algorithms).
I wrote a solution that transformed the 3 n-tree networks of nodes from the database into a data structure where each node was stored once in a node structure array and the n-tree relationship was converted to three binary trees using doubly-linked pointers within the array so that the network could be easily traced in either direction.
It was at least two orders of magnitude faster, three on really long downstream traces.
The sad thing was that I had to practically teach a class in n-trees, binary trees, pointers, and doubly-linked lists to several of the other programmers who had been trained on databases and VB in order for them to understand the algorithms.
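For readers unfamiliar with the technique, the usual way to fold an n-ary tree into a binary-tree shape is first-child/next-sibling links; a minimal sketch (the field names here are mine, not the original system's):

    /* One entry per network node, stored once in a flat array. */
    struct node {
        int          id;            /* database key of the network node */
        struct node *parent;        /* upstream, toward the substation  */
        struct node *first_child;   /* "left" link of the binary tree   */
        struct node *next_sibling;  /* "right" link of the binary tree  */
    };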
It's a classic dichotomy between "how" and "what". Your classmates are looking at "how" to program software, and they're very focused on the immediate task; from that perspective, the perspective of implementation, it seems like knowing things such as halting states and Turing machines is unimportant.
"How" is very little of the actual work that you are expected to do in computer science, though. In fact, most successful engineers I know would probably put it at less than 20 percent of the actual job. "What" to do is by far more important; and for that, the fundamentals of Computer Science are critical. "What" you want to do relates much more to design than implementation; and good design is... well, let's just call it "non-trivial".
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Good luck with your studies!
I think understanding the fundamental models of computation is useful: sure you never need to be able to translate a Turing machine into a register machine in practice, but learning how to see that two very different problems are really instances of the same concept is a critical skill.
Most knowledge is not "practical", but helps you connect dots in ways that you cannot anticipate, or gives you a richer vocabulary for describing more complex ideas.
It's not the specific problems that you study that matters, it's the principles that you learn through studying them. I use concepts about algorithms, data structures, programming languages, and operating systems every day at my job. If you work as a programmer you'll make decisions all the time that affect system performance. You need to have a solid foundation in the fundamental abstract concepts in order to make the right choices.
After I graduated from CS I thought similarly: the whole bunch of theories that we studied is completely useless in practice. This proved right for a short period of time; however, the moment you deal with complex tasks, theory is definitely MORE VALUABLE than practice. Everyone, after a few years of coding, can write programs that "work", but not everyone is able to understand how. No matter what, most of us will at some point deal with performance issues, network delays, precision, scalability, etc. At this stage theory is critical: in order to design a good solution when dealing with complex systems, it is very important to know how memory management works, the concepts of processes and threads, how memory is assigned to them, and which data structures are efficient for performance, and so on.
For example, I was once working on a project involving plenty of mathematical calculations, and at a certain point our software failed. While debugging I figured out that after some mathematical operation I received a DOUBLE with a value of 1.000000000002, which from a mathematical perspective could not be > 1, and which at some later stage in the program produced the legendary NaN. I spent some time figuring this out, but if I had paid more attention to the "approximation of Double and Float" lesson I would not have wasted that time.
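As a tiny illustration of that kind of trap (not the original code, just the same idea in miniature):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = (0.1 + 0.2) / 0.3;       /* mathematically exactly 1.0  */
        printf("x       = %.17g\n", x);     /* prints 1.0000000000000002   */
        printf("acos(x) = %f\n", acos(x));  /* domain error: typically nan */
        return 0;
    }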
If you work in a company that does groundbreaking work, it is important to be able to communicate to architects and developers what the benefits are. There is a lot of hype about all kinds of technologies and positioning yourself can be difficult. When you frame your innovation in scientific and theoretical terms you are definitely at an advantage and customers sense you are the real thing. I can tell folks: there is a new way to deal with state, encoding and nondeterminism (i.e. complexities) and you can definitely be more productive than you are today.
If you take the long view of your career, learning theory will give you depth, the depth you need to grow. The return on investment in learning your 5th or 6th programming language will be a lot less than for your 2nd and 3rd. Exposure to theory will give you a sense for real engineering, of where the degrees of freedom are and how you can make the right trade-offs.
The important concepts 1) State, 2) Encoding, 3) Nondeterminism. If you don't know them they will not help you. What theory should provide you with is the big picture and a sense of how basic concepts fit together. It should help you hone your intuition.
Example: some of the answers above mention the halting problem and Turing machines. When I came across Turing's paper in college I did not feel enlightened at all. One day I came across Goedel's incompleteness theorem and Goedel numbering in particular. Things started to make a lot of sense. Years later I read about Georg Cantor at a bookstore. Now I really started to understand Turing machines and the halting problem. Try for yourself and look up "Cantor's Diagonal Argument" on Wikipedia. It is one of the most awesome things intellectually you will ever encounter.
Food for thought: A typical Turing machine is not the only way to design a state transition machine. A Turing machine with two rather than one tape would give you a lot more speed for a number of algorithms. http://www.math.ucla.edu/~ynm/papers/eng.ps
You can expose yourself to these insights more efficiently than I did by reading this book. Link at the bottom of this post. (At the very least, check out the table of contents on Amazon to get a taste of what this is all about):
I found the book by Rosenberg sensational. http://www.amazon.com/The-Pillars-Computation-Theory-Nondeterminism/dp/0387096388 If you have only one book on theory IMHO this should be the one.
I do not use it on a daily basis. But it gave me a lot of understanding that helps me each day.
I found that all I need for daily bliss from the CS theoretical world is the utterance of the mantra "Low coupling and High Cohesion". Roger S. Pressman made it scholarly before Steve McConnell made it fashionable.
Ya, I generally use state diagrams to design the shape and flow of the program.
Once it works in theory, I start coding and testing, checking off the states as I go.
I find that they are also a useful tool to explain the behavior of a process to other people.
Simple. For example: if I'm using RSACryptoServiceProvider, I'd like to know what it is and why I can trust it.
Because C++ templates are actually some kind of lambda calculus. See www.cs.nott.ac.uk/types06/slides/michelbrink_types_06.pdf
I'm studying for my Distributed algorithms course now. There is a chapter about fault tolerance and it contains some proofs on the upper bound for how many processes can fail (or misbehave) so that the distributed algorithm can handle it correctly.
For many problems, the bound on misbehaving processes is up to one third of the total number of processes. This is quite useful in my opinion, because you know that it's pointless to try to develop a better algorithm (under the given assumptions).
Even if theoretical courses aren't going to be used directly, it might help you think better of something.
You don't know what your boss is going to ask you to do; you may have to use something that you thought wouldn't be beneficial, as Jeffrey L Whitledge said.
To be honest, I sort of disagree with a lot of the answers here. I wrote my first compiler (for fun; I really have too much coffee/free time) without having taken a course in compilers; basically I just scanned the code for another compiler and followed the pattern. I could write a parser in C off the top of my head, but I don't think I could remember how to draw a pushdown automaton if my life depended on it.
When I decided I wanted to put type inference in my toy (imperative) programming language, I first looked over probably five papers, staring at something called "typed lambda calculus" going what.... the.... ****....? At first I tried implementing something with "generic variables" and "nongeneric variables" and had no idea what was going on. Then I scrapped it all, and sat there with a notebook figuring out how I could implement it practically with support for all the things I needed (sub-typing, first-class functions, parameterized types, etc.). With a couple of days of thinking and writing test programs, I blew away more than a week's worth of trying to figure out the theoretical crap.
Knowing the basics of computing (i.e. how virtual memory works, how filesystems work, threading/scheduling, SMP, data structures) have all proved HIGHLY useful. Complexity theory and Big-O stuff has sometimes proved useful (especially useful for things like RDBMS design). The halting problem and automata/Turing Machine theory? Never.
I know this is old, but my short reply to those who claim that theory is 'useless' and that they can practice their profession without it is this:
Without the underlying theory, there is no practice.
Why is theory useful?
Theory is the underlying foundation on top of which other things are built. When theory is applied, practice is the result.
Consider computers today. The common computer today is modeled and built on top of the Turing Machine, which, to keep it simple, is an abstract/theoretical model for computation. This theoretical model lies at the foundation of computing, and all the computing devices we use today, from high-end servers to pocket phones, work because the underlying foundation is sound.
Consider algorithm analysis. In simple terms, algorithm analysis and time-complexity theory have been used to classify problems (e.g. P, NP, EXP, etc) as well as how the algorithms we have behave when trying to solve different problems in different classes.
Suppose one of your friends gets a job at some place X and, while there, a manager makes a few simple requests, such as these examples:
Ex 1: We have a large fleet of delivery vehicles that visit different cities across several states. We need you to implement a system to figure out what the shortest route for each vehicle is and choose the optimal one out of all the possibilities. Can you do it?
Thinking the theory is 'useless', your friend doesn't realize that they've just been given the Traveling Salesman Problem (TSP) and starts designing this system without a second thought, only to discover that their naive attempt to check all the possibilities, as originally requested, is so slow the system is unusable for any practical purpose.
In fact, they have no idea why the system works at an "acceptable" level when checking 5 cities, yet becomes very slow at 10 cities, and just freezes when going up to only 40 cities. They reason that it's only "2x and 8x more cities than the 5 city test" and wonder why the program does not simply require "2x and 8x more time" respectively...
Understanding the theory would've allowed them to realize the following, at least at a glance:
It's the TSP
The TSP is NP-hard
Their algorithm's order of growth is O(n!)
The numbers speak for themselves:
+--------------+-------+-----------------------------------------------------------------+
| No. Cities | O(N!) | Possibilities |
+--------------+-------+-----------------------------------------------------------------+
| 5 | 5! | 120 |
| 10 | 10! | 3,628,800 |
| 40 | 40! | 815,915,283,247,897,734,345,611,269,596,115,894,272,000,000,000 | <-- GG
+--------------+-------+-----------------------------------------------------------------+
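To make the blowup concrete, the naive approach amounts to something like the sketch below (the distance data is left empty here; only the number of permutations matters):

    #include <stdio.h>

    #define N 10    /* 10 cities: 3,628,800 orderings; 9! tours with the start fixed */

    static double dist[N][N];   /* fill from real coordinates in practice */
    static int    route[N];
    static double best = 1e300;

    static double tour_length(void) {
        double len = 0;
        for (int i = 0; i + 1 < N; i++)
            len += dist[route[i]][route[i + 1]];
        return len;
    }

    /* Try every permutation of route[k..N-1] and remember the shortest tour. */
    static void permute(int k) {
        if (k == N) {
            double len = tour_length();
            if (len < best) best = len;
            return;
        }
        for (int i = k; i < N; i++) {
            int t = route[k]; route[k] = route[i]; route[i] = t;
            permute(k + 1);
            t = route[k]; route[k] = route[i]; route[i] = t;
        }
    }

    int main(void) {
        for (int i = 0; i < N; i++) route[i] = i;
        permute(1);   /* fix the starting city; with N = 40 this never finishes */
        printf("best tour: %f\n", best);
        return 0;
    }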
They could've realized at the outset that their system was not going to work as they imagined it would. The system was later considered impractical and cancelled after a significant amount of time, effort, and other resources had been allocated to, and ultimately wasted on, the project -- and all because they thought "theory is useless".
So after this failure, the managers think "Well, maybe that system was underestimated; after all, there're a LOT of cities in our country and our computers are simply not as fast as we need them to be for our recently cancelled system to have been a success".
The management team blames slow computers as the cause of the project's failure. After all, they're not experts in CS theory, don't need to be, and those who're supposed to be the experts on the topic and could've informed them, didn't.
But they have another project in mind. A simpler one, actually. They come back a week later and say the following:
Ex 2: We have only a few servers and we have programmers who keep submitting programs that, due to unknown reasons, end up in infinite cycles and hogging down the servers. We need you to write a program that will process the code being submitted and detect whether the submitted program will cause an infinite cycle during its run or not, and decide whether the submitted program should be allowed to run on this basis. Can you do it?
Your dear friend accepts the challenge again and goes to work immediately. After several weeks of work, there're no results, your friend is stressed, and doesn't know what to do. Yet another failure... your friend now feels "dumb" for not having been able to solve this "simple problem"... after all, the request itself made it sound simple.
Unfortunately, your friend, while insisting that "theory is useless", didn't realize that the allegedly simple request was actually an undecidable problem (the halting problem itself), for which no general solution can exist. It was an impossible task.
Therefore, even starting work to solve that particular problem was an avoidable and preventable mistake. Had the theoretical framework to understand what was being requested been in place, they could've just proposed a different, and achievable, solution... such as implementing a monitoring process that can simply kill -SIGTERM <id> of any user process (as per a list of users) that monopolizes the CPU for some arbitrary/reasonable interval under certain assumptions (e.g. we know every program run should've terminated within 10 minutes, so any instance running for 20+ minutes should be killed).
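A sketch of that pragmatic workaround, assuming the caller obtains each process's running time elsewhere (e.g. from ps or /proc):

    #include <signal.h>
    #include <sys/types.h>

    #define CUTOFF_MINUTES 20   /* "every legitimate run finishes within 10 minutes" */

    /* Politely ask an overdue process to terminate. */
    static void reap_if_overdue(pid_t pid, int minutes_running) {
        if (minutes_running >= CUTOFF_MINUTES)
            kill(pid, SIGTERM);
    }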
In conclusion, practice without the theory is like a building without a foundation. Sooner or later, the right amount of pressure from the right angle will make it collapse in on itself. No exceptions.
Do you ever use it in your day-to-day coding?
Yes, but not directly. Rather, we rely on it indirectly. The caveat here is that different theoretical concepts will be more or less applicable depending on the problem domain you happen to be working on.
Surely, we:
use computers daily, which rely on computational models (e.g. Turing machines)
write code, which relies on computability theory (e.g. what's even computable) and lambda calculus (e.g. for programming languages)
rely on color theory and models (e.g. RGB and CMYK color models) for color displays and printing, etc.
rely on Euler's rotation theorem in computer graphics so that matrices can be built to rotate objects about arbitrary axes, and so on...
It's a fact that someone who simply uses a plane to travel doesn't need to understand the theory that allowed planes to be built and flown in the first place... but when someone is expected to build said machines and make them work... can you really expect a good outcome from someone who doesn't understand even the principles of flight?
Was it really a coincidence that, for most of history, no one was able to build a flying machine (and a few even died testing theirs) until the Wright brothers understood certain theoretical concepts about flight and managed to put them into practice?
It's no coincidence. We have a lot of working technology today because the people who built them understood, and applied, the theoretical principles that allowed them to work in the first place.
I guess it depends on which field you go into.
