Why do people say C is more efficient? [closed]

People always say that C is more efficient than any other high-level language. I don't understand why. I know assembly is efficient because it has a close relation to machine language.
But C and C++ or Ruby, let's say, are all going to be 'translated' into machine language, right? By more efficient, does it mean the resulting machine code is better, or that it takes less time to be 'translated' into machine code? What if there were some compiler or interpreter that could translate faster and also produce better machine code?

I know assembly is efficient because it has a close relation to machine language.
No, it does not. It has a 1:1 relation - it is a written representation of exact machine code commands. It is a mnemonic language, basically replacing byte codes with another, human-readable representation. All higher-level languages lack that.
But C and C++ or Ruby, let's say, are all going to be 'translated' into machine language, right?
Yes, but the question is when and how efficiently. Low-level languages - and C is one - allow fewer advanced constructs and are thus closer to assembler and easier for the compiler to optimize.
By more efficient, does it mean the machine code is better, or that it takes less time to be 'translated' into machine code?
Outside of just-in-time compiled languages and interpreters, no one cares about how much time translation takes. C is statically translated, once, then executed.
What if there were some compiler or interpreter that could produce faster, better machine code?
Then the statement would not be true. Funnily enough, that is not the case - it is not at all easy to make a super-efficient compiler for higher-level languages. Basically you keep asking why a super sports car is so fast, and stating that it would not be considered fast anymore if every Fiat Panda had more horsepower - but sadly they don't, and never will.

There are a lot of different issues at play here, so a full answer would be very long.
Some high-level languages are higher-level than others. C is not very high-level.
Different languages make different trade-offs. Some languages focus on ease of development, programmer productivity, preventing common errors, automation etc.
Others focus on speed/efficiency. C is one of the latter, partly due to its age and history.
Given the same level of effort, a C program is not necessarily faster than the equivalent in another language, especially on modern multi-core systems. However, C exposes more possibilities for low-level optimisations, if you have the time to write them. The downside is that these optimisations are error-prone, and getting them wrong normally crashes your program completely.
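To make that concrete, here is a minimal sketch of the kind of low-level control being described (the function and its use of C99's restrict are illustrative, not from the answer): restrict promises the compiler that the two buffers never overlap, which enables more aggressive vectorisation, and breaking that promise is undefined behaviour, exactly the error-prone trade-off mentioned above.

#include <stddef.h>

/* dst and src are promised never to overlap; the compiler can
   vectorise freely. Passing overlapping buffers is undefined
   behaviour. */
void scale_add(float *restrict dst, const float *restrict src,
               float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += k * src[i];   /* a prime candidate for SIMD */
}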

Related

Is C Compiled or/and Interpreted? [closed]

There are a lot of answers and quotations about "Compiled vs. Interpreted" and I do understand the differences between them.
When it comes to C, I am not sure: is C a compiled or an interpreted language, or both? And if both, I would be really thankful if you could add a bit of explanation.
A programming language is simply a textual representation of abstract principles. It is not compiled or interpreted - it is just text.
A compiler will take the language and translate it into machine language (via assembly code), which maps directly onto machine instructions (most systems use a binary encoding, but there are some "fuzzy" systems as well).
An interpreter will take the language and translate it into some byte-code representation that can easily be mapped onto a binary encoding on supported platforms.
The difference between the two is when that change occurs. A compiler typically will convert the text to machine language and package it into a binary file before the user runs the program (e.g. when the programmer is compiling it). An interpreter will typically do that conversion when the user is running the program. There are trade-offs for both approaches.
The whole point here is that the language itself is neither compiled nor interpreted; it is just a textual standard. The implementation details of turning that text into machine instructions are where the compilation-or-interpretation choice is made.
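As a toy illustration (entirely hypothetical, not from the answer), the heart of a bytecode interpreter is a dispatch loop like the one below: the conversion from encoded operations to machine actions happens one opcode at a time, while the user's program is running.

enum op { OP_PUSH, OP_ADD, OP_HALT };

/* Execute a tiny stack-based bytecode program and return the value
   left on top of the stack. */
int run(const int *code)
{
    int stack[64];
    int sp = 0;                          /* stack pointer */
    for (int pc = 0; ; ) {               /* fetch-decode-execute loop */
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}

For example, the program {OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT} evaluates to 5.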
It's typically compiled, although there is of course nothing preventing people from implementing interpreters.
It's generally wrong to classify languages as either/or; what is the language going to do? It's just a spec on paper, it can't prevent people from implementing it as either a compiler or an interpreter, or some combination/hybrid approach.
There are languages which are designed to make compilation easy, by giving the user only features that map directly onto machine instructions, such as arithmetic, pointer manipulation, and function calls (including indirect function calls, which give you virtual dispatch). Interpreting these is generally also easy, but gives particularly poor performance. C is one of these.
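As a small sketch of that last feature (the names here are made up for illustration): a function pointer in C maps almost one-to-one onto a single indirect-call instruction, and a table of them is the raw material virtual dispatch is built from.

#include <stdio.h>

typedef int (*binop)(int, int);          /* pointer-to-function type */

static int add(int a, int b) { return a + b; }
static int mul(int a, int b) { return a * b; }

int main(void)
{
    binop ops[] = { add, mul };          /* a tiny dispatch table */
    printf("%d\n", ops[0](2, 3));        /* indirect call: prints 5 */
    printf("%d\n", ops[1](2, 3));        /* indirect call: prints 6 */
    return 0;
}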
Other languages are designed for interpretation. These often have dynamic typing, lazy dispatch, dynamic (not lexical) scope of closures, reflection, dynamic codegen, and other features that make compilation incredibly difficult. Of course difficult is not the same as impossible, and some of these languages do end up with compilers as a result of Herculean efforts.

Computer program more intelligent than the programmer? Is it possible? [closed]

Today I asked a psychologist: how can you design an IQ test that assesses a man more intelligent than the designer? He answered me: the same way you can design a chess program that its designer cannot beat!
As a beginner, I'm not sure if this question can be answered here, but it is interesting to me whether we can write a program that can evolve and learn by itself such that a human (even the programmer) cannot predict it. I hope the answer is no; otherwise, there may one day be viruses or worms with unpredictable behavior, controlling human society!
Artificial intelligence agents behave within some programmed space (a chess-playing agent is inside the chess-playing space).
Agents cannot leave the programmed space. A chess-playing agent is unlikely to take over the world any time soon. It is predictable in this sense.
The behaviour within this space is somewhat predictable: it is, after all, based on well-defined mathematical equations (usually quite complex ones, so prediction is hard, but possible). However, there is usually some randomness involved, which is obviously not predictable.
Note "intelligence" is not the same as predictability. Researchers have been trying to make AI truly intelligent for a long time, with (arguably) slow progression.
EDIT:
Note that some agents can have the entire world as their programmed space. This doesn't enforce a lot of boundaries.
By 'programmed space' I don't mean what was programmed into the agent as much as what the agent is programmed to observe or do. If an agent can only see a chess board and only make chess moves, how will it ever become more than a chess-playing agent?
True evolution may allow agents to extend their programmed space, but I'll have to think about whether this is actually possible.
It is possible. Chess programs actually do beat their designers by wide margins. They beat the world champions in chess by smaller margins, but that is just a matter of time.
There is an example for a system that "learns how to learn even faster": evolution on earth has optimized itself. Genes and behavior are optimized to facilitate a high rate of marginal improvement. Reproduction almost never fails and genes have just the right amount of mutation due to natural defects (radiation, chemical processes, ...). The "tunables" have been set nicely.
I think your text in the question describes two situations:
The first paragraph covers an IQ test and a chess game. Both have a limited number of options. Even though there are a lot of possibilities, the number is finite, and many can be ruled out from the start because their value is too low to even consider. That is why programs like these exist. Do notice, though, that there are still A LOT of possibilities, and that's why it isn't perfect yet.
The second paragraph covers a self-learning program or robot. In the real world, there is an infinite number of possibilities, of things that can happen. You might try to code a program, but there is no way (in the near future) you can take account of all the things life has to offer.
I do have to comment on Dukeling's comment below. If you manage to code a program which can learn to react to pretty much everything life has to offer (including the negative parts), an AI like that will probably evaluate its own 'space' and will be able to look and even step outside of it.
Long story short: it will happen. What its result will be is unknown: either robots will be programmed perfectly, or they will be shut down, or the human race will go extinct. Every scenario is possible, because we will advance in technology at a pace you can't even imagine right now. Have your doubts? Go tell someone 50 years ago that a machine would beat the best chess player in the world.

Small C Code Optimizations (Hacks): Useless Today? [closed]

20 years ago, there were (almost) no compiler optimizations, so we started to use some hacks, such as:
Use pointers, not array indexes.
Don't use small functions (such as swap()), use macros or write the code directly.
Today, we have complex compiler optimizations. Array indexing and pointer arithmetic are the same. If we use -O3 (I know, it's dangerous), the compiler will inline away every function, leaving only main().
So, are the small hacks in the old books (Programming Pearls, The C Programming Language) useless today? Do they just make the code more unreadable?
Programming Pearls is about optimisation at the algorithm level, not at the code level, so it's still highly relevant today.
Code micro-optimisations are another story though, and many of the old tricks are now either redundant or even harmful. There are still important techniques that can be applied to performance-critical code today, but these also may become redundant/harmful at some point in the future. You need to keep up-to-date with advances in CPU micro-architecture and compiler technology and use only what's appropriate (and only when absolutely needed of course - premature optimisation being the root of all evil.)
"Use pointers, not array indexes."
This has never been more efficient. Even the old drafts of ANSI C specified that the two are equivalent:
3.3.2.1 Array subscripting
The definition of the subscript operator [] is that E1[E2] is identical to
(*(E1+(E2)))
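As a hedged illustration (the function names are made up), the two loops below are equivalent by that very definition, and a modern optimising compiler emits essentially the same machine code for both:

int sum_indexed(const int *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];                  /* a[i] is defined as *(a + i) */
    return s;
}

int sum_pointer(const int *a, int n)
{
    int s = 0;
    for (const int *p = a; p != a + n; p++)
        s += *p;                    /* the old "hack": no faster today */
    return s;
}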
"Don't use small functions (such as swap()), use macros or write the code directly."
This has been obsolete for quite a while. C99 introduced the inline keyword, but even before that, compilers were free to inline parts of the code. It makes no sense to write such function-like macros today for efficiency reasons.
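For illustration, here is a sketch of the old hack next to its modern replacement (the names are made up): the inline function is type-checked, evaluates its arguments exactly once, and is inlined just as effectively by any current compiler.

/* Old hack: function-like macro, no type checking, caller must
   supply a temporary. */
#define SWAP(a, b, tmp) do { (tmp) = (a); (a) = (b); (b) = (tmp); } while (0)

/* Modern equivalent: C99 inline function, equally fast after
   optimisation. */
static inline void swap_int(int *a, int *b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}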
"So, the small hacks in the old books (Programming Pearls, The C Programming Language) are useless today? They are just make the code more unreadable?"
Please note that what follows here is just my personal opinion and not a consensus among the world's programmer community: I would personally say that those two books are not only useless, they are harmful. Not so much because of various optimization tricks, but mainly because of the horrible, unreadable coding style and the heavy reliance on poorly-defined behavior. Both books are also filled with bugs and typos, so you can't even read them without the errata next to you.
Those hacks are still useful in case you are not allowed to turn on optimization for whatever reason. Sometimes the compiler will also not be able to optimize code, because it does not know about the intended and unintended side effects of a certain piece of code.
It really depends on what requirements you have. In my experience, there are still things you can express in better ways in order to make the compiler understand your intention better. It's always a trade-off to sacrifice readability in order to gain a better compilation result.
Basically, yes. But, if you do find a particularly ridiculous example of a missed optimization opportunity, then you should report it to the developers!
Braindead source code will always produce braindead machine code though: to a certain extent the compiler still has to do what you say, rather than what you meant, although many common idioms are recognised and "fixed" (the rule is that it has got to be impossible to tell that it's been altered without using a debugger).
And then there are still tricks, new and old, that are useful, at least on some architectures.
For example, if you have a loop that counts from 0 to 100 and does something to an array, some compilers might reverse the counter and make it go from 100 down to zero (because comparing against zero is cheaper than comparing against another constant), but they can't do that if your loop has a side effect. If you don't care that the side effect happens in reverse order, then you can get better code by reversing the counter yourself.
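A minimal sketch of that trick, assuming the store order really doesn't matter:

/* Counting down lets the loop end on a compare-against-zero, which
   is cheap on many architectures; whether it still wins depends
   entirely on the target and the compiler. */
void clear100(int *a)
{
    for (int i = 100; i-- > 0; )    /* visits i = 99 down to 0 */
        a[i] = 0;
}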
Another useful trick that GCC has is __builtin_expect(expr, bool), with which you can tell the compiler that expr is likely to be true or false, so it can optimize branches accordingly. Similarly, __builtin_unreachable() can tell GCC that something can't happen, so it doesn't have to allow for the case where it does.
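Both builtins do exist in GCC (and in Clang); a short sketch of their use follows. The likely/unlikely wrapper macros are a widespread convention, not part of the compiler.

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int classify(const char *p)
{
    if (unlikely(p == 0))            /* hint: this branch is rare */
        return -1;
    switch (*p) {
    case 'a': return 1;
    case 'b': return 2;
    default:
        __builtin_unreachable();     /* promise: no other input occurs */
    }
}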
In general though, the compiler is good enough that you really don't need to care unless your program spends 90% of its runtime in that one tiny function. (For example, memcpy is still typically written in assembler).

Why aren't programs written in Assembly more often? [closed]

It seems to be a mainstream opinion that assembly programming takes longer and is more difficult to program in than a higher-level language such as C. Therefore it seems to be recommended or assumed that it is better to write in a higher-level language, for these reasons and for the sake of better portability.
Recently I've been writing in x86 assembly and it has dawned on me that perhaps these reasons are not really true, except perhaps portability. Perhaps it is more of a matter of familiarity and knowing how to write assembly well. I also noticed that programming in assembly is quite different than programming in an HLL. Perhaps a good and experienced assembly programmer could write programs just as easily and as quickly as an experienced C programmer writing in C.
Perhaps it is because assembly programming is quite different than HLLs, and so requires different thinking, methods and ways, which makes it seem very awkward to program in for the unfamiliar, and so gives it its bad name for writing programs in.
If portability isn't an issue, then really, what would C have over a good assembler such as NASM?
Edit:
Just to point out. When you are writing in assembly, you don't have to write just in instruction codes. You can use macros and procedures and your own conventions to make various abstractions to make programs more modular, more maintainable and easier to read. This is where being familiar with how to write good assembly comes in.
Hello, I am a compiler.
I just scanned thousands of lines of code while you were reading this sentence. I browsed through millions of possibilities of optimizing a single line of yours using hundreds of different optimization techniques based on a vast amount of academic research that you would spend years getting at. I won't feel any embarrassment, not even a slight ick, when I convert a three-line loop to thousands of instructions just to make it faster. I have no shame to go to great lengths of optimization or to do the dirtiest tricks. And if you don't want me to, maybe for a day or two, I'll behave and do it the way you like. I can transform the methods I'm using whenever you want, without even changing a single line of your code. I can even show you how your code would look in assembly, on different processor architectures and different operating systems and in different assembly conventions if you'd like. Yes, all in seconds. Because, you know, I can; and you know, you can't.
P.S. Oh, by the way you weren't using half of the code you wrote. I did you a favor and threw it away.
ASM has poor legibility and isn't really maintainable compared to higher-level languages.
Also, there are many fewer ASM developers than for other more popular languages, such as C.
Furthermore, if you use a higher-level language and new ASM instructions become available (SSE for example), you just need to update your compiler and your old code can easily make use of the new instructions.
What if the next CPU has twice as many registers?
The converse of this question would be: What functionality do compilers provide?
I doubt you can/want to/should optimize your ASM better than gcc -O3 can.
I've written shedloads of assembler for the 6502, Z80, 6809 and 8086 chips. I stopped doing so as soon as C compilers became available for the platforms I was addressing, and immediately became at least 10x more productive. Most good programmers use the tools they use for rational reasons.
I love programming in assembly language, but it takes more code to do the same thing as in a high-level language, and there is a direct correlation between lines of code and bugs. (This was explained decades ago in The Mythical Man-Month.)
It's possible to think of C as 'high level assembly', but get a few steps above that and you're in a different world. In C# you don't think twice about writing this:
foreach (string s in listOfStrings) { /* do stuff */ }
This would be dozens, maybe hundreds of lines of code in assembly, each programmer implementing it would take a different approach, and the next person coming along would have to figure it out. So if you believe (as many do) that programs are written primarily for other people to read, assembly is less readable than the typical HLL.
Edit: I accumulated a personal library of code used for common tasks, and macros for implementing C-like control structures. But I hit the wall in the 90s, when GUIs became the norm. Too much time was being spent on things that were routine.
The last task I had where ASM was essential was a few years ago, writing code to combat malware. No user interface, so it was all the fun parts without the bloat.
In addition to other people's answers of readability, maintainability, shorter code and therefore fewer bugs, and being much easier, I'll add an additional reason:
program speed.
Yes, in assembly you can hand-tune your code to make use of every last cycle and make it as fast as is physically possible. However, who has the time? If you write a not-completely-stupid C program, the compiler will do a really good job of optimizing for you, probably making at least 95% of the optimizations you'd do by hand, without you having to worry about keeping track of any of it. There's definitely a 95/5 kind of rule here, where that last 5% of optimizations will end up taking 95% of your time. So why bother?
If an average production program has, say, 100k lines of code, and each line is about 8-12 assembler instructions, that would be 1 million assembler instructions.
Even if you could write all this by hand at a decent speed (remember, it's 8 times more code that you have to write), what happens if you want to change some of the functionality? Understanding something you wrote a few weeks ago, somewhere in those 1 million instructions, is a nightmare! There are no modules, no classes, no object-oriented design, no frameworks, no nothing. And the amount of similar-looking code you have to write for even the simplest things is daunting at best.
Besides, you can't optimize your code nearly as well as in a high-level language. Where C, for example, performs an insane number of optimizations because you describe your intent, not only your code, in assembler you only write code; the assembler can't really perform any noteworthy optimizations on it. What you write is what you get, and trust me, you can't reliably optimize 1 million instructions that you patch and patch as you write them.
Well I have been writing a lot of assembly "in the old days", and I can assure you that I am much more productive when I write programs in a high level language.
A reasonable level of assembler competence is a useful skill, especially if you work at any sort of system level or embedded programming, not so much because you have to write that much assembler, but because sometimes it's important to understand what the box is really doing. If you don't have a low-level understanding of assembler concepts and issues, this can be very difficult.
However, as for actually writing much code in assembler, there are several reasons it's not much done.
There's simply (almost) no need. Except for something like very early system initialization and perhaps a few assembler fragments hidden in C functions or macros, all very low-level code that might once have been written in assembler can be written in C or C++ with no difficulty.
Code in higher-level languages (even C and C++) condenses functionality into far fewer lines, and there is considerable research showing that the number of bugs correlates with the number of lines of source code. I.e., the same problem, solved in assembler and in C, will have more bugs in assembler simply because it is longer. The same argument motivates the move to higher-level languages such as Perl, Python, etc.
Writing in assembler, you have to deal with every single aspect of the problem: detailed memory layout, instruction selection, algorithm choices, stack management, etc. Higher-level languages take all this away from you, which is why they are so much denser in terms of LOC.
Essentially, all of the above are related to the level of abstraction available to you in assembler versus C or some other language. Assembler forces you to make all of your own abstractions, and to maintain them through your own self-discipline, where any mid-level language like C, and especially higher level languages, provide you with abstractions out of the box, as well as the ability to create new ones relatively easily.
As a developer who spends most of his time in the embedded programming world, I would argue that assembly is far from a dead/obsolete language. There is a certain close-to-the-metal level of coding (for example, in drivers) that sometimes cannot be expressed as accurately or efficiently in a higher-level language. We write nearly all of our hardware interface routines in assembler.
That being said, this assembly code is wrapped such that it can be called from C code and is treated like a library. We don't write the entire program in assembly for many reasons. First and foremost is portability; our code base is used on several products that use different architectures and we want to maximize the amount of code that can be shared between them. Second is developer familiarity. Simply put, schools don't teach assembly like they used to, and our developers are far more productive in C than in assembly. Also, we have a wide variety of "extras" (things like libraries, debuggers, static analysis tools, etc) available for our C code that aren't available for assembly language code. Even if we wanted to write a pure-assembly program, we would not be able to because several critical hardware libraries are only available as C libs. In one sense, it's a chicken/egg problem. People are driven away from assembly because there aren't as many libraries and development/debug tools available for it, but the libs/tools don't exist because not enough people use assembly to warrant the effort creating them.
In the end, there is a time and a place for just about any language. People use what they are most familiar and productive with. There will probably always be a place in a programmer's repertoire for assembly, but most programmers will find that they can write code in a higher-level language that is almost as efficient in far less time.
When you are writing in assembly, you don't have to write just in instruction codes. You can use macros and procedures and your own conventions to make various abstractions to make programs more modular, more maintainable and easier to read.
So what you're basically saying is, that with skilled use of a sophisticated assembler, you can make your ASM code closer and closer to C (or anyway another low-ish-level language of your own invention), until eventually you are just as productive as a C programmer.
Does that answer your question? ;-)
I don't say this idly: I have programmed using exactly such an assembler and system. Even better, the assembler could target a virtual processor, and a separate translator compiled the output of the assembler for a target platform. Much as happens with LLVM's IR, but in its early forms, pre-dating it by about 10 years. So there was portability, plus the ability to write routines for a specific target assembler where required for efficiency.
Writing using that assembler was about as productive as C, and by comparison with GCC 3 (which was around by the time I was involved) the assembler/translator produced code that was roughly as fast and usually smaller. Size was really important, and the company had few programmers and was willing to teach new hires a new language before they could do anything useful. And we had the back-up that people who didn't know the assembler (e.g. customers) could write C and compile it for the same virtual processor, using the same calling convention and so on, so that it interfaced neatly. So it felt like a marginal win.
That was with multiple man-years of work in the bag developing the assembler technology, libraries, and so on. Admittedly, much of that went into making it portable; if it had only ever targeted one architecture, then the all-singing, all-dancing assembler would have been much easier.
In summary: you may not like C, but it doesn't mean that the effort of using C is greater than the effort of coming up with something better.
Assembly is not portable between different microprocessors.
The same reason we don't go to the bathroom outside anymore, or why we don't speak Latin or Aramaic.
Technology comes along and makes things easier and more accessible.
EDIT - to cease offending people, I've removed certain words.
Why? Simple.
Compare this :
for (var i = 1; i <= 100; i++)
{
    if (i % 3 == 0)
        Console.Write("Fizz");
    if (i % 5 == 0)
        Console.Write("Buzz");
    if (i % 3 != 0 && i % 5 != 0)
        Console.Write(i);
    Console.WriteLine();
}
with
.locals init (
[0] int32 i)
L_0000: ldc.i4.1
L_0001: stloc.0
L_0002: br.s L_003b
L_0004: ldloc.0
L_0005: ldc.i4.3
L_0006: rem
L_0007: brtrue.s L_0013
L_0009: ldstr "Fizz"
L_000e: call void [mscorlib]System.Console::Write(string)
L_0013: ldloc.0
L_0014: ldc.i4.5
L_0015: rem
L_0016: brtrue.s L_0022
L_0018: ldstr "Buzz"
L_001d: call void [mscorlib]System.Console::Write(string)
L_0022: ldloc.0
L_0023: ldc.i4.3
L_0024: rem
L_0025: brfalse.s L_0032
L_0027: ldloc.0
L_0028: ldc.i4.5
L_0029: rem
L_002a: brfalse.s L_0032
L_002c: ldloc.0
L_002d: call void [mscorlib]System.Console::Write(int32)
L_0032: call void [mscorlib]System.Console::WriteLine()
L_0037: ldloc.0
L_0038: ldc.i4.1
L_0039: add
L_003a: stloc.0
L_003b: ldloc.0
L_003c: ldc.i4.s 100
L_003e: ble.s L_0004
L_0040: ret
They're identical feature-wise.
The second one isn't even assembler but .NET IL (Intermediate Language, similar to Java's bytecode). A second compilation step then transforms the IL into native code (i.e. almost assembler), making it even more cryptic.
I'd guess ASM makes sense even on x86(_64) in cases where you gain a lot by using instructions that are difficult for a compiler to optimize for. x264, for example, uses a lot of asm for its encoding, and the speed gains are huge.
I'm sure there are many reasons, but two quick reasons I can think of are
Assembly code is definitely harder to read (and I'm positive it's more time-consuming to write as well).
When you have a huge team of developers working on a product, it is helpful to have your code divided into logical blocks and protected by interfaces.
One of the early discoveries (you'll find it in Brooks' The Mythical Man-Month, which draws on experience from the 1960s) was that people were more or less as productive in one language as in another, measured in debugged lines of code per day. This obviously isn't universally true, and can break down when pushed too far, but it was generally true of the high-level languages of Brooks' time.
Therefore, the fastest way to gain productivity is to use languages where one individual line of code does more, and indeed this works, at least for languages of the complexity of FORTRAN and COBOL, or, to give a more modern example, C.
Portability is always an issue -- if not now, at least eventually. The programming industry spends billions every year to port old software which, at the time it was written, had "obviously" no portability issue whatsoever.
There was a vicious cycle as assembly became less commonplace: as higher level languages matured, assembly language instruction sets were built less for programmer convenience and more for the convenience of compilers.
So now, realistically, it may be very hard to make the right decisions on, say, which registers you should use or which instructions are slightly more efficient. Compilers can use heuristics to figure out which tradeoffs are likely to have the best payoff. We can probably think through smaller problems and find local optimizations that might beat our now pretty sophisticated compilers, but odds are that in the average case, a good compiler will do a better job on the first try than a good programmer probably will. Eventually, like John Henry, we might beat the machine, but we might seriously burn ourselves out getting there.
Our problems are also now quite different. In 1986 I was trying to figure out how to get a little more speed out of small programs that involved putting a few hundred pixels on the screen; I wanted the animation to be less jerky. A fair case for assembly language. Now I'm trying to figure out how to represent abstractions around contract language and servicer policy for mortgages, and I'd rather read something that looks close to the language that the business folks speak. Unlike LISP macros, Assembly macros don't enforce much in the way of rules, so even though you might be able to get something reasonably close to a DSL in a good assembler, it'll be prone to all sorts of quirks that won't cause me problems if I wrote the same code in Ruby, Boo, Lisp, C# or even F#.
If your problems are easy to express in efficient assembly language, though, more power to you.
Ditto most of what others have said.
In the good old days before C was invented, when the only high level languages were things like COBOL and FORTRAN, there were lots of things that just weren't possible to do without resorting to assembler. It was the only way to get the full breadth of flexibility, to be able to access all the devices, etc. But then C was invented, and almost anything that was possible in assembly was possible in C. I have written very little assembly since then.
That said, I think it is a very useful exercise for new programmers to learn to write in assembler. Not because they would actually use it much, but because then you understand what is really happening inside the computer. I've seen lots of programming errors and inefficient code from programmers who clearly have no idea what's really happening with the bits and bytes and registers.
I've been programming in assembly now for about a month. I often write a piece of code in C and then compile it to assembly to assist me. Perhaps I am not utilizing the full optimizing power of the C compiler, but it appears that the assembly generated from my C source includes unnecessary operations. So I am beginning to see that the talk of a good C compiler outperforming a good assembly coder is not always true.
Anyways, my assembly programs are so fast. And the more I use assembly, the less time it takes me to write out my code, because it's really not that hard. Also, the comment about assembly having poor legibility is not true. If you label your programs correctly and add comments where additional elaboration is needed, you should be all set. In fact, in some ways assembly is clearer to the programmer, because they are seeing what is happening at the level of the processor. I don't know about other programmers, but for me, I like knowing what's happening rather than having things in a sort of black box.
With that said, the real advantage of compilers is that a compiler can understand patterns and relationships and then automatically code them in the appropriate locations in the source. One popular example is virtual functions in C++, which require the compiler to optimally map function pointers. However, a compiler is limited to doing what its maker allows it to do. This leads to programmers sometimes having to resort to doing bizarre things with their code, adding coding time, when the same thing could have been done trivially in assembly.
Personally, I think the marketplace heavily supports high-level languages. If assembly language were the only language in existence today, there would be about 70% fewer people programming, and who knows where our world would be; probably back in the 90s. Higher-level languages appeal to a broader range of people. This allows a higher supply of programmers to build the needed infrastructure of our world. Developing nations like China and India benefit heavily from languages like Java. These countries will quickly develop their IT infrastructure, and people will become more interconnected. So my point is that high-level languages are popular not because they produce superior code, but because they help to meet demand in the world's marketplaces.
I'm learning assembly in comp org right now, and while it is interesting, it is also very inefficient to write in. You have to keep a lot more details in your head to get things working, and it's also slower to write the same things. For example, a simple 6-line for loop in C++ can equal 18 lines or more of assembly.
Personally, it's a lot of fun learning how things work down at the hardware level, and it gives me greater appreciation for how computing works.
What C has over a good macro assembler is the language C itself. Type checking. Loop constructs. Automatic stack management. (Nearly) automatic variable management. Dynamic memory techniques in assembler are a massive pain in the butt; doing a linked list properly is downright scary compared to C, or better yet list foo.insert(). And debugging - well, there's no contest on what is easier to debug. HLLs win hands down there.
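For comparison, here is a sketch of the C side of that argument (names made up for illustration); each of these lines would expand into several instructions of manual pointer and allocator bookkeeping in assembler.

#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* Insert at the head of a singly linked list; returns the new head,
   or NULL if allocation failed. */
struct node *insert(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return NULL;
    n->value = value;
    n->next = head;
    return n;
}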
I've coded nearly half my career in assembler, which makes it very easy for me to think in assembler. It helps me to see what the C compiler is doing, which in turn helps me write code that the C compiler can handle efficiently. A well-thought-out routine written in C can, with a little work, be made to output exactly what you want in assembler - and it's portable! I've already had to rewrite a few older asm routines back to C for cross-platform reasons, and it's no fun.
No, I'll stick with C and deal with the occasional slight slowdown in performance against the productivity time I gain with HLL.
I can only answer why I personally don't write programs in assembly more often, and the main reason is that it's more tedious to do. Also, I think that it is easier to get things subtly wrong without noticing immediately. E.g., you might change the way you use a register in one routine but forget to change this in one place. It'll assemble fine and you may not notice until much later.
That said, I do think there are still valid uses for assembly. For instance, I have a number of pretty optimised assembly routines for processing large amounts of data, using SIMD and following the paranoid "every bit is sacred"[quote V.Stob] approach. (But note that naive assembly implementations are often a lot worse than what a compiler would generate for you.)
C is a macro assembler! And it's the best one!
It can do nearly everything assembly can, it can be portable, and in most of the rare cases where it can't do something, you can still use embedded assembly code. This leaves only a small fraction of programs that you absolutely need to write in assembly and nothing but assembly.
And the higher level abstractions and the portability make it more worthwhile for most people to write system software in C. And although you might not need portability now if you invest a lot of time and money in writing some program you might not want to limit yourself in what you'll be able to use it for in the future.
People seem to forget that there is also the other direction.
Why are you writing in Assembler in the first place? Why not write the program in a truly low level language?
Instead of
mov eax, 0x123
add eax, 0x456
push eax
call printInt
you could just as well write
B823010000
0556040000
50
FF15.....
That has so many advantages: you know the exact size of your program, you can reuse the value of instructions as input for other instructions, and you do not even need an assembler to write it; you can use any text editor...
And the reason you still prefer assembler over this is the reason other people prefer C...
Because it's always that way: time passes, and good things pass away too :(
But when you write asm code, it's a totally different feeling from coding in high-level languages, even though you know it's much less productive. It's like being a painter: you are free to draw anything you like, the way you like, with absolutely no restrictions (well, restricted only by CPU features)... That is why I love it. It's a pity this language is fading away, but as long as somebody still remembers it and codes in it, it will never die!
$$$
A company hires a developer to help turn code into $$$. The faster that useful code can be produced, the faster the company can turn that code into $$$.
Higher level languages are generally better at churning out larger volumes of useful code. This is not to say that assembly does not have its place, for there are times and places where nothing else will do.
The advantage of HLLs is even greater when you compare assembly to a higher-level language than C, e.g. Java or Python or Ruby. For instance, these languages have garbage collection: no need to worry about when to free a chunk of memory, and no memory leaks or bugs due to freeing too early.
As others have mentioned, the reason for any tool to exist is how efficiently it can do its job. Since HLLs can accomplish the same job as many lines of asm code, I guess it's natural for assembly to be superseded by other languages. And for close-to-the-hardware fiddling, there's inline assembly in C and other variants, as per language.
Dr. Paul Carter says in PC Assembly Language:
"...a better understanding of how computers really work at a lower level than in programming languages like Pascal. By gaining a deeper understanding of how computers work, the reader can often be much more productive developing software in higher level languages such as C and C++. Learning to program in assembly language is an excellent way to achieve this goal."
We've got an introduction to assembly in my college courses. It helps to clear up concepts. However, I doubt any of us would write 90% of our code in assembly. How relevant is in-depth assembly knowledge today?
Flipping through these answers, I'd bet 9/10 of the responders have never worked with assembly.
This is an ages old question that comes up every so often and you get the same, mostly misinformed answers. If it weren't for portability, I'd still do everything in assembly myself. Even then, I code in C almost like I did in assembly.

How long to learn C? [closed]

I'm a C# programmer and I'm sold on the benefits of learning C. I want to deepen my knowledge of the underlying OS and CPU, understand the pain of memory management that garbage collection encapsulates away and generally improve my high-level programs thanks to an appreciation of the low-level issues that the compiler is dealing with on my behalf.
My question is how long can I expect to spend learning the C language in order to gain these benefits?
Is a couple of weekends spent reading the K&R book from cover to cover sufficient, or do I need to schedule time to cut some code? Do I need to spend time delving into any libraries, or is an understanding of the first-order concepts in the language enough to improve my C# code?
To be clear, I don't intend to write any significant programs in C. My goal is more to learn from the language than to become an expert in the language.
C will take a week to learn, and a lifetime to master.
Reading the K&R book and not writing code is like reading a book on weapons and never actually shooting. Yes, you've read in a book that it works this way, but you have never encountered the typical problems that arise while doing it. Without practice, such "knowledge" is worth very little.
Plan to spend 2-3 years slowly writing small programs that solve different tasks in C. This will count as real experience. C provides delayed gratification for your effort.
I'm not sure how long it takes to learn a language - it probably comes down to the individual. But I'm pretty confident you can't learn one without writing and debugging code in it.
Ten Years
If you can read K&R and understand it all, that's pretty good, as K&R covers pretty much all of the language.
However, reading it and understanding it all are very different. You should probably take a few passes through K&R and do all the associated exercises to ensure you really know it.
Even after reading through all of that, you will spend more months learning pointers the hard way. Expect lots of seg faults. On the plus side though, you'll get really good at reading hex!
There are a few caveats in the language that you'll find out about as well. One that used to give me trouble is that all pointers are the same size on a typical platform (4 bytes on 32-bit x86), regardless of what they point at: a char* is the same size as a void* and an int*.
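A quick way to check this on your own machine (a trivial sketch; note that the C standard does not actually guarantee that all pointer types have the same size, this is just how common platforms behave):

#include <stdio.h>

int main(void)
{
    /* Typically prints "4 4 4" on 32-bit x86 and "8 8 8" on
       typical 64-bit platforms. */
    printf("%zu %zu %zu\n",
           sizeof(char *), sizeof(void *), sizeof(int *));
    return 0;
}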
It will take a lot lot longer if you just sit around asking abstract questions and not actually diving in and doing it. Do you have a deadline or something? How long will it take me to learn the piano? Who cares, I just wanna make some noise. That's how kids learn so fast. They don't care about becoming an expert, or even good. They just like to play.
In any case, if you want to learn some interesting things, try some assembler as well. A lot of people really hate it, but that's just because they don't like spending countless hours not accomplishing much. I like it just fine.
You definitely need to write some code - I don't believe you can learn any language without doing that. K&R has lots of exercises you can practice on. It's difficult to know how long in terms of elapsed time it will take to get a good working knowledge - I used to teach pretty much the whole language in 4.5 days, but that is quite intensive. I'd suggest about a month, if you are doing an hour or so a day.
Edit: I must admit, I find it a bit depressing that so many people think C is so difficult. K&R is 272 pages long in my copy and covers basically everything you need to know, including the standard library. Is there a book in ANY other programming language that covers the whole shebang so concisely? I don't think so, and the reason is not that K&R is compressed in some way (Brian Kernighan is THE greatest technical writer, IMHO) but that the language is simple and easy to describe.
I read the K&R book cover to cover and would not say I have any great understanding of C. Some time doing the exercises in K&R would be hugely beneficial.
I'm sure C libraries would make you more productive writing programs, but if it is simply learning C you are interested in, then you can implement anything you need yourself. www.projecteuler.net is a good source of problems (although slightly mathematical in general) to get you started, if you fancy trying some coding outside of the K&R exercises.
In a couple of weekends, you will obtain mainly two results:
hello world
a lot of segmentation faults
C is not easy, in particular if you are not used to its hardcore concepts. You will have to invest weeks, even months, in tinkering with it to grasp its most obscure (though still not too obscure) essence.
40 days and 40 nights.
If you can't do the days and nights sequentially, then it will be 42 weekends.
But seriously, without putting any context on how fast you learn other topics, nobody can give you a real answer that is relevant to you. We can say how long it took us to learn it to a satisfying level, but that has zero correlation to how long it should take you to learn it.
If you said it took you 6 months to be good at C#, then maybe we can say it should take you 6 months * X (where X is still a guess, but a better guess than now).
We can all agree, however, that just reading the book is not enough. Of course you will have to write code. That is how we best learn anything - read it, write it, teach it. If you really want to learn something, teach it.
To understand the pain of memory management, just begin writing sample programs with stacks, linked lists, binary trees, etc. You'll see what you're getting into.
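As a taste of what that exercise teaches (an illustrative sketch, not from the answer): without a garbage collector, every structure you build must be torn down by hand, in the right order, exactly once.

#include <stdlib.h>

struct tree {
    int key;
    struct tree *left, *right;
};

/* Free a whole binary tree: children first, then the parent, since
   freeing the parent first would lose the child pointers. */
void tree_free(struct tree *t)
{
    if (t == NULL)
        return;
    tree_free(t->left);
    tree_free(t->right);
    free(t);
}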
In school, I was taught C as the introductory language, and when pointers were introduced, a whole slew of people dropped the class because, frankly, it's a hard concept to grasp.
As many of the other answers have stated, plan to not only read but practice. There's no doubt you learned a lot of C# by making mistakes while coding and having 'aha!' moments.
IMO: 3 to 4 years to really understand the majority of concepts. A book will help you realize what the capabilities of the language are.
