Where can I find a C programming reference? [closed] - c

Where can I find a C programming reference that will list the declaration of built-in functions?

"The C Programming Language"

You can either buy the ISO C standard (drafts are free), or C: A Reference Manual, by Harbison and Steele. Both are very good. The Standard C Library, by P.J. Plauger, is a good book about implementing the standard C library. All of the above have the prototypes of the standard functions in them.
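To give a sense of what these references provide: each standard function is listed with its header and prototype. For example, a minimal sketch using strlen, whose prototype (size_t strlen(const char *s)) is declared in the standard header <string.h>:

    /* looking up and using a standard function from its documented prototype */
    #include <stdio.h>
    #include <string.h>   /* declares: size_t strlen(const char *s); */

    int main(void)
    {
        printf("%zu\n", strlen("hello"));  /* prints 5 */
        return 0;
    }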

Grab the K&R book, if you don't already have it.
Also, this page looks like it might be a good start for you.

I read Gottfried for C. It was a very good book to start learning programming with.

I've always found this website to be useful for C programming:
CPlusPlus.
Even though the title and the website say C++, it has a C Reference section with lots of examples.

For quick reference, I like the OpenGroup site
http://pubs.opengroup.org/onlinepubs/9699919799/
It contains both the standard and the POSIX functions, with their differences noted, in case that is needed.

Here's a list of the best books. Choose the one that fits your specific needs best.
A Tour of C++ (Bjarne Stroustrup) The "tour" is a quick (about 180 pages and 14 chapters) tutorial overview of all of standard C++ (language and standard library, and using C++11) at a moderately high level for people who already know C++ or at least are experienced programmers. This book is an extended version of the material that constitutes Chapters 2-5 of The C++ Programming Language, 4th edition.
The C++ Programming Language (Bjarne Stroustrup) (updated for C++11) The classic introduction to C++ by its creator. Written to parallel the classic K&R, it indeed reads very much like it and covers just about everything from the core language to the standard library, to programming paradigms, to the language's philosophy. (Thereby making the latest editions break the 1k-page barrier.) [Review] The fourth edition (released on May 19, 2013) covers C++11.
C++ Standard Library Tutorial and Reference (Nicolai Josuttis) (updated for C++11) The introduction and reference for the C++ Standard Library. The second edition (released on April 9, 2012) covers C++11. [Review]
The C++ IO Streams and Locales (Angelika Langer and Klaus Kreft) There's very little to say about this book except that, if you want to know anything about streams and locales, then this is the one place to find definitive answers. [Review]
C++11 References:
The C++ Standard (INCITS/ISO/IEC 14882-2011) This, of course, is the final arbiter of all that is or isn't C++. Be aware, however, that it is intended purely as a reference for experienced users willing to devote considerable time and effort to its understanding. As usual, the first release was quite expensive ($300+ US), but it has now been released in electronic form for $60 US.
Overview of the New C++ (C++11/14) (PDF only) (Scott Meyers) (updated for C++1y/C++14) These are the presentation materials (slides and some lecture notes) of a three-day training course offered by Scott Meyers, who's a highly respected author on C++. Even though the list of items is short, the quality is high.
Beginner
Introductory
If you are new to programming or if you have experience in other languages and are new to C++, these books are highly recommended.
C++ Primer * (Stanley Lippman, Josée Lajoie, and Barbara E. Moo) (updated for C++11) Coming in at 1k pages, this is a very thorough introduction to C++ that covers just about everything in the language in a very accessible format and in great detail. The fifth edition (released August 16, 2012) covers C++11. [Review]
Accelerated C++ (Andrew Koenig and Barbara Moo) This basically covers the same ground as the C++ Primer, but does so in a quarter of the space. This is largely because it does not attempt to be an introduction to programming, but an introduction to C++ for people who've previously programmed in some other language. It has a steeper learning curve, but, for those who can cope with this, it is a very compact introduction to the language. (Historically, it broke new ground by being the first beginner's book to use a modern approach to teaching the language.) [Review]
Thinking in C++ (Bruce Eckel) Two volumes; the second is more about the standard library, but both are still very good.
Programming: Principles and Practice Using C++ (Bjarne Stroustrup) An introduction to programming using C++ by the creator of the language. A good read that assumes no previous programming experience, but is not only for beginners. (The 2nd edition, updated for C++11, is coming.)
Not to be confused with C++ Primer Plus (Stephen Prata), with a significantly less favorable review.
Best practices
Effective C++ (Scott Meyers) This was written with the aim of being the best second book C++ programmers should read, and it succeeded. Earlier editions were aimed at programmers coming from C, the third edition changes this and targets programmers coming from languages like Java. It presents ~50 easy-to-remember rules of thumb along with their rationale in a very accessible (and enjoyable) style. [Review]
Effective STL (Scott Meyers) This aims to do for the part of the standard library that came from the STL what Effective C++ did for the language as a whole: it presents rules of thumb along with their rationale. [Review]
Intermediate
More Effective C++ (Scott Meyers) Even more rules of thumb than Effective C++. Not as important as the ones in the first book, but still good to know.
Exceptional C++ (Herb Sutter) Presented as a set of puzzles, this has one of the best and most thorough discussions of proper resource management and exception safety in C++ through Resource Acquisition Is Initialization (RAII), in addition to in-depth coverage of a variety of other topics including the pimpl idiom, name lookup, good class design, and the C++ memory model. [Review]
More Exceptional C++ (Herb Sutter) Covers additional exception safety topics not covered in Exceptional C++, in addition to discussion of effective object oriented programming in C++ and correct use of the STL. [Review]
Exceptional C++ Style (Herb Sutter) Discusses generic programming, optimization, and resource management; this book also has an excellent exposition of how to write modular code in C++ by using nonmember functions and the single responsibility principle. [Review]
C++ Coding Standards (Herb Sutter and Andrei Alexandrescu) "Coding standards" here doesn't mean "how many spaces should I indent my code?" This book contains 101 best practices, idioms, and common pitfalls that can help you to write correct, understandable, and efficient C++ code. [Review]
C++ Templates: The Complete Guide (David Vandevoorde and Nicolai M. Josuttis) This is the book about templates as they existed before C++11. It covers everything from the very basics to some of the most advanced template metaprogramming, explains every detail of how templates work (both conceptually and in how they are implemented), and discusses many common pitfalls. It has excellent summaries of the One Definition Rule (ODR) and overload resolution in the appendices. A second edition is scheduled for 2015. [Review]
Advanced
Modern C++ Design (Andrei Alexandrescu) A groundbreaking book on advanced generic programming techniques. Introduces policy-based design, type lists, and fundamental generic programming idioms then explains how many useful design patterns (including small object allocators, functors, factories, visitors, and multimethods) can be implemented efficiently, modularly, and cleanly using generic programming. [Review]
C++ Template Metaprogramming (David Abrahams and Aleksey Gurtovoy)
C++ Concurrency In Action (Anthony Williams) A book covering C++11 concurrency support including the thread library, the atomics library, the C++ memory model, locks and mutexes, as well as issues of designing and debugging multithreaded applications.
Advanced C++ Metaprogramming (Davide Di Gennaro) A pre-C++11 manual of TMP techniques, focused more on practice than theory. There are a ton of snippets in this book, some of which are made obsolete by type traits, but the techniques are, nonetheless, useful to know. If you can put up with the quirky formatting/editing, it is easier to read than Alexandrescu and, arguably, more rewarding. For more experienced developers, there is a good chance that you may pick up something about a dark corner of C++ (a quirk) that usually only comes about through extensive experience.
Classics / Older
Note: Some information contained within these books may not be up-to-date or no longer considered best practice.
The Design and Evolution of C++ (Bjarne Stroustrup) If you want to know why the language is the way it is, this book is where you find answers. This covers everything before the standardization of C++.
Ruminations on C++ - (Andrew Koenig and Barbara Moo) [Review]
Advanced C++ Programming Styles and Idioms (James Coplien) A predecessor of the pattern movement, it describes many C++-specific "idioms". It's certainly a very good book and still worth a read if you can spare the time, but quite old and not up-to-date with current C++.
Large Scale C++ Software Design (John Lakos) Lakos explains techniques for managing very big C++ software projects. Certainly a good read, if only it were up to date. It was written long before C++98 and misses many features (e.g. namespaces) important for large-scale projects. If you need to work on a big C++ software project, you might want to read it, though you will need to take more than a grain of salt with it. There has long been a rumor that Lakos is writing an up-to-date edition.
Inside the C++ Object Model (Stanley Lippman) If you want to know how virtual member functions are commonly implemented and how base objects are commonly laid out in memory in a multi-inheritance scenario, and how all this affects performance, this is where you will find thorough discussions of such topics.

The best book on C for beginners is Let Us C by Yashwant Kanetkar.

My recommendation to beginners in C programming is Let Us C by Yashwant Kanetkar.

Related

Brief Explanation of C Supersets?

I'm getting more and more confused about C's supersets the further I venture into the programming world. There are just so many versions: C, C++, C#, Objective-C, Objective-C++, and God knows what else.
I only know tidbits about these languages (some are object-oriented, some are procedural, C was originally developed for UNIX, C++ started as an extension and is used primarily on the Windows OS, Objective-C is primarily used on Linux and Mac OS/iOS, etc), but I'm not even sure that what I know is correct.
I would just like someone to shed some light on what I "know" - a little bit more information about which are successive versions, which platforms each are generally used on, which are the best versions to learn, etc if anyone is feeling generous. :)
Update
Also, I hope to eventually start native (without needs of plugins, such as the .NET framework) application development for Windows and Mac, so can anyone confirm that I would need to learn C++ for Windows and Objective-C for Mac?
C++ started life as "C with classes", in which Bjarne Stroustrup, working at AT&T, sought to add features like methods and inheritance to C structures in a way that required as little support from a runtime library as possible. The classes added in "C with classes" were very heavily influenced by the Simula language. C++ has since grown to include a number of other features, some of the important ones being generics, lambda functions, and an expressive standard type library.
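To make that concrete, here is a minimal sketch in plain C (an illustration only, not Stroustrup's actual design): the heart of the "C with classes" idea is that a "method" can be modeled as an ordinary function taking a pointer to the struct as an explicit "this".

    /* a "class" approximated in plain C: data in a struct, a "method"
       as a function that takes the object as an explicit parameter */
    #include <stdio.h>

    struct point {
        double x, y;
    };

    static void point_move(struct point *self, double dx, double dy)
    {
        self->x += dx;   /* "self" plays the role of C++'s implicit "this" */
        self->y += dy;
    }

    int main(void)
    {
        struct point p = {1.0, 2.0};
        point_move(&p, 0.5, -0.5);
        printf("(%.1f, %.1f)\n", p.x, p.y);  /* prints (1.5, 1.5) */
        return 0;
    }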
Objective-C started life not too dissimilar from its current state, except that it was originally implemented as a C preprocessor. Brad Cox, at his company Stepstone, wanted to combine the power of the Smalltalk object-oriented model with the performance of C native execution. Objective-C uses a dynamic message-passing system to dispatch calls to objects, a design that's directly opposed to C++'s goal of doing everything in the compiler. Thus while Objective-C and C++ start from the same base, the results are very different.
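A hedged sketch of that contrast, again in plain C: a direct call is resolved by the compiler, while a call through a function pointer is resolved at run time. Objective-C's messaging is far more dynamic than this, but the direction is the same.

    /* static vs. dynamic dispatch in miniature */
    #include <stdio.h>

    static const char *dog_speak(void) { return "woof"; }
    static const char *cat_speak(void) { return "meow"; }

    struct animal {
        const char *(*speak)(void);  /* which function? decided at run time */
    };

    int main(void)
    {
        printf("%s\n", dog_speak()); /* static: the target is known to the compiler */

        struct animal pets[] = { { dog_speak }, { cat_speak } };
        for (int i = 0; i < 2; i++)
            printf("%s\n", pets[i].speak());  /* dynamic: found through the pointer */
        return 0;
    }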
Both of the above language authors also published books explaining the design intention behind their respective languages. Cox's book is long out of print, but both are worth reading if you get the chance. Stroustrup's publications list
Objective-C++ relies on the fact that GCC (or LLVM) can generate code in either language, so the compiler writers allow you to use features from both in the same source file. There are some limits on what's possible at the boundary layer, and Objective-C++ is mainly used either to adapt a C++ library to an Objective-C app or to use STL or Boost data types from C++ inside an Objective-C class.
Finally, languages like C# and Java are not C supersets or C descendants at all. They use C-like syntax to provide some familiarity (and perhaps to avoid having to think about designing a de novo language syntax) but are different beasts that work in a very different way.
You're mixing up the name of the language and what that was meant to imply.
C was first: procedural, low-level, and non-generic (at least not in a type-safe way), as you may know.
Then C++ originated as C with classes and templates. Later this evolved into something that sets itself apart from C more and more, expanding its standard library at each step. C++ is a multi-paradigm language, meaning it has facilities for procedural, generic, functional (a little), and object-oriented programming. Don't forget Boost as a powerful extension to the standard library.
Objective-C is a C expanded with numerous runtime facilities (like messages) to ease object-oriented development. Objective-C++ is the same extensions applied to C++, staying compatible as much as possible with the original language.
C# is something entirely different. Think of it as a hybrid between Java and C++, although that characterization will stir up bad blood with users of all three languages. It has a garbage collector.
"(Objective-)C is used primarily on Linux" is not entirely correct; all of KDE is C++, for one. "C++ primarily on Windows" isn't entirely true either: the OS API is all C. What Microsoft made of a C++ API (MFC/ATL) is a living hell. Microsoft is pushing C#, along with the associated .NET stuff, a lot, so think of C# as mostly Windows, although there is an ongoing effort to create a cross-platform .NET alternative (Mono). Apple's Foundation APIs are Objective-C, but the underlying OS APIs are all still C.
"Superset" is the wrong terminology: C is not a subset of C++; the two share a common subset. Only Objective-C(++) is a pure superset of C(++).
As to the best versions to learn: the latest, obviously, perhaps limited to the compiler support on the various platforms you will write for.

AI library framework in Ada

I'm looking for an AI framework written in Ada. I think Ada would be perfect for implementing temporal and stochastic paradigms due to its tasking and real-time mechanisms, but I did not find anyone who has tried to make such libraries. Actually, I did not find strong implementations in other languages either. For C# I found http://www.c-sharpcorner.com/1/56/, and for C++ I found http://mind.sourceforge.net/cpp.html, but neither got much popularity. Maybe Java has good AI libraries too, but I do not know. So, do you know of an Ada implementation? Would it be useful to anyone else? If you know of libraries in other languages, it would be useful to compare their implementation models, in Java for example. Thanks.
Here are a few resources:
Book, rather old, though (1989): Artificial Intelligence With Ada
Looks like some kind of university student dissertation: MUTANTS: A generic genetic algorithm toolkit for Ada 95
Dmitry Kazakov's AI stuff, mostly fuzzy logic. (Dmitry writes really nice software.)
I once had a school AI project that used the CLIPS AI builder library.
Since I avoid coding in C where I don't have to, I made an Ada Binding to it, which I believe is licensed without restriction. If you want it, have at.
I used it to build an expert system capable of playing a user's opening moves in Empire. All the code is either in Ada or in CLIPS' expert-system specification language.
Here's a potentially useful Java library. I haven't heard of any Ada libraries. Ada is a great language, though.
Here's some genetic stuff.

How to pick a language for Artificial Intelligence programming? [closed]

What is the best programming language for artificial intelligence purposes?
Mind that, with the suggested language, I must be able to employ any AI technique (or at least most of them).
All the cool bearded gurus in what's left of AI research use Lisp :)
There are two big camps: Common Lisp and Scheme. They have different syntax, etc. Lots of good stuff written for both.
Java is a very popular all-purpose language, but a lot of the interesting stuff in AI / functional programming, such as passing closures around as first-class objects, is clumsy to do in Java.
My personal preference would be to stay away from Windowsy languages like C# and F#. Cool people develop under Unix. Or Linux if they're cool but poor.
Some cool but weird people program in Haskell. A reasonably modern FP language with good performance. I tried it once, it made my brain hurt; but you might be smarter than I am.
UPDATE: Answers to Steve's questions.
I wouldn't be the one paying for a Unix variant; that's what corporations and research institutes do. The idea is, you want to be doing AI research for an outfit that sinks millions into their hardware and doesn't balk at paying a few thousand for an operating system. That's the kind of outfit likely to have good food in the cafeteria and/or pay well for doing fun work. But I'm certainly not knocking Linux.
F# may be cool but I see a whole raft of issues getting it to run on Linux or any other Unix (that's what I meant by "windowsy"), and I don't want to work under Windows (that's what I meant by "personal preference").
To elaborate on the "windowsy" theme: You mention that F# is an OCaml variant. From my own admittedly brief research, it seems that F# is missing functors, OCaml-style objects, polymorphic variants, and the camlp4 preprocessor. A functional language without functors? Really? If one were disposed to not like Microsoft, as I admittedly am, one could conclude that they had gone ahead and crowbarred a perfectly good functional language, OCaml, into something they could get to run on their CLR so they could claim to "have" a functional language. Finally, because I don't merely suspect but know that Microsoft always prioritizes market dominance over product quality, I don't plan to touch F#. But this is my personal preference, and clearly identified as such, while we're really more concerned with making a good recommendation for mary.ja45.
I have better reasons to recommend Lisp over F# and even OCaml and Haskell. These are mostly based on the historic preponderance of Lisp over any other language in the AI field.
The bulk of AI literature is based on programs written in Lisp or Prolog. If nothing else, good knowledge of Lisp would allow a student to understand the sample programs. My personal favorite AI megaproject, Cyc, has runtimes in your choice of Common Lisp or C.
In the TIOBE index of programming languages (as seen and used in industry), Lisp takes 15th place while Haskell takes 43rd, and F# and OCaml place below 50th. Presence in the market correlates with employment opportunities, naturally.
That said, it's quite possible that a number of the younger "AI interesting" languages are poised to skyrocket. If some major research institute published some groundbreaking, defining-the-field research in, say, Scala, you'd see Scala's popularity advance sharply in the research community and, with some lag, in industry.
I (obviously) can't comment on F#'s other qualities but you're as welcome to make recommendations as I was.
Python seems to be used a lot in the general scientific community. It has a lot of libraries available and it's easy to learn.
I'll throw Scala into the pot.
it's usable for functional programming
it can be made as fast as Java
it's a modern language with lots of nice aspects
Java seems to be a bit popular in AI too, so you can use all those Java libraries from Scala
I've solved all exercises from a basic AI course in Scala. It worked really well.
If by "all of AI" you also mean machine learning, which I guess, Matlab, R and Python+Scipy should definately be mentioned.
I personally use Clojure for AI programming, and have found it to be a great all-rounder AI language.
Reasons:
It's a Lisp, and Lisps have historically been very strong in the AI field
It's a homoiconic language with powerful macros, so great for code generation and genetic programming. This is a surprisingly useful property for AI programming (and possibly explains some of the success of Lisp in general in this space)
It runs on the JVM and can easily access all the Java libraries for number crunching (Weka, Colt, etc.).
It's good for rapid interactive development - it's very dynamic and you can do pretty much everything interactively in a running Clojure REPL. No need for recompiling etc.
It matters probably whether the programming environment is academic or not, but for most non-academic AI application development I would recommend sticking with a mainstream language like Java or C++. One needs to be able to interface readily with other COTS or open-source software packages, and this can sometimes be difficult or impossible in more "exotic" languages. For academic work this may be a less critical issue.
Additionally, performance can be important for many applications, and mainstream languages generally have the most heavily-optimized compilers, e.g., C++ or Java.
It is true that functional programming languages like LISP, Scheme, etc have specialized features that may make it easier to implement particular AI methods, but I do not believe this to be true for AI-related programming as a whole, e.g., quantitative machine learning methods usually don't require a functional language. If you need access to both functional constructs and general software packages, there are some tools for LISP to help with this, and the recently-developed Clojure is a LISP-variant that runs on the JVM and can access Java libraries. Also, Groovy is another JVM-based language that includes support for closures.
Lastly, some programmers like paradigm flexibility and/or fast prototyping for AI projects. Ruby and Python both see some AI-related usage for this reason as multi-paradigm languages that can also be used for scripting.
Like most things in programming, the best answer for which language to use in AI development will ultimately depend on the needs of your projects.
It really depends on what kind of problem you are looking at, and also on how "deep" you want to go into AI. If you want to start from the basics and just implement theoretical AI techniques, go with a higher-level language, such as a functional one proven in AI like Lisp, or Prolog. If you know what problem set you are dealing with and want efficiency, go with something like Java or C++ and use a toolkit to do the heavy lifting.
Since you mention machine learning, look into the Weka toolkit in Java for some of this.
Pick a programming language for AI techniques the same way you pick a language for any other project:
What is the problem you are trying to solve?
Is there good support available for the language?
What are the customer's requirements?
I would recommend Prolog as a very good programming language for implementing AI systems.
There is no "best" language. Each one has its merits. When I studied AI, mostly we worked with lisp and prolog, but I've been most productive in AI with Java/C# and F# has a lot to offer.
How about a framework written in Java, supporting "High Level Logic" and agent-style communication?
http://highlevellogic.blogspot.com/2010/11/when-will-we-have-artificial.html
It also depends on the size of your dataset. For web-scale datasets you may want to use MapReduce, and that implies Hadoop. Hadoop is in Java, but you can use any language (Python, etc.) for your MapReduce functions.
There is also a Java framework called Weka, developed by the University of Waikato. I don't know whether it answers your question, but it may help.
Quoting Wikipedia: "Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization" and more.

Does any programmer have to know C? Yes, why? No, why? [closed]

Since my first year at university I have envied my fellow students (mainly those coming from tech-oriented professional schools) for knowing C. I came from a natural-sciences-oriented lyceum and had no programming experience or courses, just some summer work with PHP learned from a teach-yourself-PHP-in-7-hours book (and my interest in programming was very recent). I want to know: was it a legitimate envy? Does every programmer have to know C? Does C provide a deep understanding of how a system works or of how programming should be carried out? I know that when programming in C you have to have a strong understanding of buffers, memory, and so on. So I want your opinion on it.
No, you don't have to know C. But knowledge of C (or any other "close to the machine but not assembler" language) greatly enhances your potential as a programmer, because you will understand a lot more of the inner workings.
And of course knowledge of assembler is also valuable, but in the above paragraph I wanted to target the low end of computer languages. In modern languages we take so much for granted (OO, extensive libraries, and garbage collection, to name a few). Yes, this helps us programmers work more efficiently, but it hides some of the machine aspects that we sometimes need to see, and that's why it is so important to know what is going on underneath.
While I think it is important to understand memory allocation, pointers, registers, and so on, I would say that too much experience in C can also be a barrier to grasping higher-level or OO languages. C tends to make you think procedurally, and that can be a bad thing in some cases...
I would definitely not recommend C as a first language, but it can be of great help to do some C at some point...
As a side note, I think that assembly can be much more useful in helping you understand basic principles. At the same time, assembly is far enough from any language you would actually use (unless you work in a very specialized field) that it will keep you in a different mindset when writing it than when using a higher-level language.
This is quite a debatable topic. I personally think that knowing C enhances your ability to work with other languages too, once you understand what's going on under the hood. But you don't have to know C to be able to produce high-quality code.
Eric Sink has also given this question some thought.
Something I find missing from most of these answers is that C is a very easy language to learn. Everything you'll ever need to know about the syntax is contained in one thin, concise book (K&R), and that includes all the standard libraries. So I'd encourage you to at least skim a book and see what it's all about, even if you don't intend to use it.
That simple C syntax only gets expanded for most modern C-based languages (C++, C#, Java). You can't say you really know those languages until you've mastered at least a subset of the hundreds of libraries that come with them, and that can take months or years of experience.
What's tough about C is that it can expose you to the true nature of the machine underneath. If you really want to grok how a computer works, you need to understand things like pointers, memory allocation, and stack vs. heap vs. executable code. You can learn basic C syntax in a few hours, and that puts you on the road to understanding much more. Saying "I'm a C expert" is just a proxy for saying, "I really understand how a computer works."
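As a minimal illustration of those road markers (my own sketch, not the answerer's): stack and heap storage behave differently, and pointers make the difference visible.

    /* stack vs. heap, made visible with pointers */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int on_stack = 10;                       /* automatic storage: gone when the function returns */
        int *on_heap = malloc(sizeof *on_heap);  /* dynamic storage: yours until you free it */
        if (on_heap == NULL)
            return 1;
        *on_heap = 20;

        printf("stack: %d at %p\n", on_stack, (void *)&on_stack);
        printf("heap:  %d at %p\n", *on_heap, (void *)on_heap);

        free(on_heap);                           /* forget this and you have a leak */
        return 0;
    }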
You don't necessarily have to know C, but imo every programmer should know about basic machine architecture and how applications interact with the OS and the hardware.
Obviously if you're going to study this, C is a good choice for a language, but not the only option.
Another good reason to know C is that a lot of code is written in C, so if you want to learn from others' code, it will be very helpful.
In addition to Gamecat's answer, in my experience working with people in other languages, there is a difference in skill between the folks who know C and the folks who don't. I work primarily in Java and certainly appreciate having spent a few years working with C before I did. On top of that I also did quite a bit of Perl work. I would say knowing as many languages as possible helps to give you different views on your work, letting you apply different paradigms.
C is not my language of choice, but even to this day, C is everywhere.
When I do some small code in Lua using LuaCurl, I use a C library. Lua itself is written in C.
When I do some Seaside Web application in Squeak Smalltalk, I use a VM generated in C (the Squeak VM is written in Smalltalk, and then it generates C code as a portable assembler).
So I would not start learning programming with C (see this thread for other choices), but as a programmer, knowing C is very handy even if it is not your language of choice.
In the same way that not every mechanic needs to know the inner workings of an engine to fix a car, not every programmer needs to learn C to produce code.
However, the ones who do acquire a better understanding of the craft and ultimately achieve a higher level of success.
I share the same sentiments as the others in this thread, however, to answer the question that was asked:
The only programmers that have to know C are C programmers.
All knowledge is useful, so yes, you should envy their knowledge. You should also envy people who are AI nerds and know LISP, etc. The best mix would be a dynamic language, a functional language, SQL, a low-level language, and an object-oriented language.
If you want some stranger to make some recommendations, I would go Python, OCaml, SQL, C and Java/C#. But, find your own path :-)
I think you do.
Languages have their own evolution. They developed within a very intriguing and fast evolution of computer systems. CPU power grew, features grew, Assembler got more complex... everything got more powerful.
Thing is: if you never saw the low level and "easy" beginnings, and you start with some high-level languages like C#, C++, or Java, you won't understand the elegance or backend perspective of these very powerful languages.
I don't think you need to learn LISP, because it differs a lot from common C-like languages. But some C is a must-know. It was made by developers, for developers, and is very near to machine code. Know what the machine does when you program it.
Memory allocation and pointers.
After getting to handle C and C++, even if only in school, you have a better understanding of what needs to happen in memory for you to throw objects and references around, and you gain a better appreciation of what, for instance, garbage collection implies.
Also, in school, starting with Pascal and then C allowed us to learn "programming" first, the old way, and then move to more advanced languages (OOP, etc) on top of that.
Have to know/learn C? Probably not, although it might make learning some concepts easier. Understanding something about memory allocation, structures, and pointers of all kinds is worthwhile, and C is a good language for gaining that understanding. Plus, to be honest, "straight" C is really not complex. Tricky to get right, sure, but not in itself complex. I'd advise getting hold of a compiler that isn't a C++ one too, since down that route can lie madness. (Fond memories of Quick C For Windows.)
Have to know/learn C++? Definitely not.
I would use a metaphor: knowing C is for programmers what knowing Latin is for writers of Western languages. It is not something you need, especially if you just write sports columns or cooking recipe books, but if you want to refine your craft it is something I would consider nearly mandatory. It will not be useful for daily work on ordinary software, though.
Or knowing mechanics for a racing driver, or how to build sails for a sailor: at some level of expertise, you need to know how the things you use actually work inside.
If you learn C, try to master the pointer concept and the way pointers map to the hardware. That's really the point of learning C. Do not spend time on the rest of the language.
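For instance, a tiny sketch of what "pointers map to the hardware" means: an array occupies consecutive addresses, and pointer arithmetic walks them in steps of the element size.

    /* the printed addresses differ by exactly sizeof(int) */
    #include <stdio.h>

    int main(void)
    {
        int a[4] = {1, 2, 3, 4};
        int *p = a;                  /* points at the first element */

        for (int i = 0; i < 4; i++)
            printf("a[%d] = %d at %p\n", i, *(p + i), (void *)(p + i));
        return 0;
    }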
Maybe it was just my particular educational experience, but I hated C/C++ in college and haven't touched it since. I'm thankful for learning about the concepts involved, like pointers and memory allocation, but trying to accomplish anything with the tools that were available to me was too cumbersome for me to want to bother. I hope your experience is better.
("slightly" tongue-in-check) You should learn it if, for no other reason, than it will make you love whatever language you're working in at that moment.
I don't call myself a C programmer, but I can write code in C. It has helped me a number of times in my career. I've spent a lot of time working with Visual Basic, and there are some things you just can't do with VB. It's been very handy to drop down to C to do things like Windows hooks. It's made me the "hero" a time or two.

Should .NET developers *really* be spending time learning C for low-level exposure? [closed]

When Joel Spolsky and Jeff Atwood began the disagreement in their podcast over whether programmers should learn C, regardless of their industry and platform of delivery, it sparked quite an explosive debate within the developer community that probably still rages amongst certain groups today. I have been reading a number of passages from a number of programmer bloggers with their takes on the matter. The arguments from both sides certainly carry weight, but what I did not find was a perspective angled specifically from the standpoint of developers focused on just the .NET Framework. Practically all of them were commenting from a general programmer standpoint.
What am I trying to get at? Recall Jeff Atwood's opinion that most of the time such developers spend would be on learning the business/domain, on top of whatever is needed to learn the technologies used to meet those domain requirements. In my working experience that is a very accurate description of the work life of many. Now, supposing that .NET developers can find time for "extracurricular" learning, should it be spent on C?
For the record, I learned C back in school myself, and I can absolutely understand and appreciate what the proponents are arguing for. But, thinking things through, I personally feel .NET developers should not dive straight into C. Because the thing I wish more developers would take some time to learn is MSIL and the CLR.
Maybe I am stuck with an unusual bunch of colleagues, I don't know, but it seems to me many people do not keep a conscious awareness that their C# or VB code compiles to IL first, before the JIT comes in and turns it into raw machine code. Most do not know IL and have no interest in how exactly the CLR handles the code they write. Reading Jeffrey Richter's CLR via C# was quite a shocker for me in so many areas; I am glad I read it despite colleagues dismissing it as "too low level". I am no expert in IL, but with knowledge of the basics I found myself following his text more easily, since I was already familiar with the stack behaviour of IL. I find myself disassembling assemblies to have a look at how the IL turns out when I write certain code.
I learn the CLR and MSIL because I know they are the direct layer below me, the layer that allows me to carry out my own layer of work. C is actually further down. Closer to our "reality" are the CLR and MSIL. That is why I would recommend others have a go at those, because I do not see enough folks delving into that layer. Or is your team already fully conversant with MSIL?
Of course you should. The greatest way to become overly specialized and single-minded (and, correspondingly, have limited marketable skills) is to only work with a single type of language and eschew all others as "not related to your current task."
Every programmer should have some experience with a modern JIT'd OO language (C#/Java), a lower-level, simpler language (C, FORTRAN, etc.), a very high-level interpreted language (Python, Ruby, etc.), and a functional language (Scheme, Lisp, Haskell, etc.). Even if you don't use all of them on a day-to-day basis, the broadening of your thought process that such knowledge grants is quite useful.
I already know C, and that helped me during the .NET 1.1 days, when there were a lot of things not yet in the .NET base libraries and I had to P/Invoke something from the Platform SDK.
My take is that we should always allocate time for learning something we don't yet know. To answer your question: I don't think it is essential for you to learn C, but if you have some time to spare, C is a good language to learn, and it is just as valid as any other language out there.
True, C is way down the chain. Knowing MSIL can help devs understand how to optimise their apps better. As for learning C or MSIL: why not both? :)
.NET developers should learn about the CLR. But they should also learn C. I don't see how anybody can really understand how the CLR works without some low-level understanding of what happens on the bare metal.
Spending time learning about higher-level concepts is certainly beneficial, but if you concentrate too much on the high-level at the expense of the low-level, you risk becoming one of those "architect" people who can draw boxes and lines on whiteboards but who are unable to write any actual code.
What you learn by learning C will be useful for the remainder of your career. What you learn about the CLR will become obsolete as Microsoft changes their platform.
My take is that learning some compiled language and assembly is a must. Without that, you will not get the versatility required to switch between languages and stacks.
To be more specific -- I think that any good/great programmer must know these things by direct experience:
What is the difference between a register and a variable?
What is DMA?
How is a pixel put on the screen (at low level)?
What are interrupts?
...
Knowing these things is the difference between working with a system you understand and a system that, for all you know, works by magic. :)
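As one hypothetical miniature of the first question on that list (my own sketch): C's register keyword hints that a value may live in the CPU rather than in memory, and such an object is the one kind of variable whose address you may not take.

    /* a register is a hint, an ordinary variable always has an address */
    #include <stdio.h>

    int main(void)
    {
        register int r = 1;   /* may never exist in memory; &r is not allowed in C */
        int v = 2;            /* has a memory address */
        int *p = &v;

        printf("r = %d, v = %d (at %p)\n", r, *p, (void *)p);
        return 0;
    }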
To address some comments
You end up having two different kinds of developers:
people that can do one thing in 10 ways in one or two languages
people that can do one thing in one or two ways in 10 different languages
I strongly think that the second group makes the better developers overall.
I think of it like this:
Programmers should probably be actually working in the highest-level language appropriate. What's appropriate depends on your scenario. A device driver or embedded system is in a different class from a CRUD desktop app or web page.
You want your programmers to have as much practice as possible in the language in which they are working.
Since most programmers end up working on generic desktop and web apps, you want programming students to move into the higher level languages as soon as possible during school.
However, the higher-level languages hide a few basic programming concepts, like pointers. If we apply our principle of using what's appropriate to students as well, those higher-level languages may not be appropriate for first-year students. That throws out Java, .NET, Python, and many others.
So students should use C (or better yet: C++ since it's "higher-level" and covers most of the same concepts) for the first year or two of school to cover basic concepts, but quickly move up to a higher-level language to enable more difficult programs earlier.
To be sufficiently advanced in writing C#, you need to understand the concepts in C, even if you don't learn the language proper.
More generally though, if you're serious about any skill, you should know what goes on at least one abstraction level below your primary working level.
Coding in jQuery should be paired with an understanding of JavaScript
Designing circuits necessitates knowing physics
Any good basketball player will learn about muscles, bones, and nutrition
A violinist will learn about the interplay of rosin, friction, bow hairs, string, and wood dryness
I like to learn a new language every year. Not necessarily to master it, but to force my brain to think in different ways.
I feel C is a good language for learning low-level concepts without the pain of coding in assembly.
However, I feel that the lessons from languages like Haskell, Python, and even arguably regex (not exactly a language, but you catch my drift?) are as important as the lessons to be gleaned from C.
So I say: learn about the CLR and MSIL on the job if that's your area, and in your spare time try picking up a different language every so often. If that happens to be C this year, good for you, and enjoy playing with pointers ;)
I don't see any reason why they should. Languages like Java and C# were designed so that you needn't worry about the low-level details. That's like asking whether a WinForms developer should spend time learning the Win32 API because that's what's happening underneath.
While it doesn't hurt to learn it, you'd probably gain more from spending more time learning the languages and platforms you are familiar with, unless there's a good need to learn the low-level technical details.
It can't be a bad idea to learn MSIL, but in a way it's just another .NET language, but with nasty syntax. It is another layer down, though, and I think people should have at least some vague understanding of all the layers.
C, being somewhat like assembly language with nicer syntax, is a nice way to get an idea of what's happening on quite a low level (although some things are still hidden from you).
And from the other end, I think everyone should know a bit of something like Haskell or Lisp to get an idea of higher-level stuff (and see some of the ideas being introduced in C# 3 in a cleaner form)
If you consider yourself a programmer, I would say yes, learn C.
Many people who write code do not consider themselves programmers. I write .NET apps maybe 3 hours a day at work, but I don't label myself a "programmer." I do a lot of things that have nothing to do with programming.
If you spend your whole day programming or thinking about programming, and you are going to make your entire career revolve around programming, then you had better be sure you know your stuff. Learning C would probably help build a base of knowledge that would be helpful if you're going to go very deep into programming skills.
With everything, there are trade-offs. The more languages you learn, and the more time you spend dedicated to technology, the less time you have for learning other skills. For example, would it be better to learn C, or read books on project management? It depends on your goals. You want to be the best programmer EVAR? Learn C. Spend hours and hours writing code and dedicating yourself to the craft. You ever want to manage somebody else instead of coding all day? Use the time you would put into programming and find ways to improve your soft skills.
Should .net developers be learning C? I would say "not necessarily," but we should always be dabbling in some language outside of our professional bailiwick because every language brings with it a new way of thinking about problems. During my professional career as a .net (and before that, VB 2-6) developer, I've written small projects in Pascal, LISP, C, C++, PHP, JavaScript, Ruby, and Python and am currently dabbling in Lua and Perl.
Other than C++, I don't list any of them on my resume because I'm not looking to be a professional in any of them. Instead, I bring back interesting ideas from each of them to use in my .net-based work.
C is interesting in that it really gets you close to the OS, but that's hardly the only level you need to know about to be a good programmer.
The CLR is a virtual machine so if that's all you learn, then you only know what's happening at a virtual level.
Learning C will teach you more about the physical machine as far as memory usage goes, which as you mention is what the CLR uses underneath. Learning how the CLR works isn't going to give you as much insight into, say, garbage collection, as learning C. With C, you really appreciate what's involved in memory management.
Learning CIL, on the other hand, tells you a bit more about execution in .NET than you would learn from C. Even so, how IL maps to machine language will remain a mystery for the most part, so knowing some of the high-level opcodes, like the ones for casting types, isn't that helpful in terms of understanding what's really going on, as they're opaque for the most part. Learning C and pointers, however, will enlighten you on some of those aspects.
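A short sketch of the kind of insight meant here (my illustration): in C you can see the difference between converting a value and inspecting the raw bytes that represent it.

    /* a cast converts a value; the bytes underneath are something else entirely */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float f = 1.5f;

        int i = (int)f;                     /* value conversion: i is 1 */
        printf("(int)%.1f = %d\n", f, i);

        unsigned char bytes[sizeof f];
        memcpy(bytes, &f, sizeof f);        /* the object representation */
        printf("the bytes of %.1f:", f);
        for (size_t k = 0; k < sizeof f; k++)
            printf(" %02x", (unsigned)bytes[k]);
        printf("\n");
        return 0;
    }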
Is the issue learning C or MSIL, or is it more fundamental? I'd say that in general, more developers could stand to learn more about how computers, physical or virtual, work. A person can get to be a fairly competent programmer by only understanding a language and API in a box. To take the profession to the next level, I feel that developers really need to understand the whole stack. Not necessarily in detail, but in sufficient generality to help solve problems.
A lot of the skills being talked about here can be acquired by learning more about compilers and language design. You probably need to learn C to do this (whoops, sneaky), but compiler writing is a great context in which to learn C. Steve Yegge talks about this on his blog, and I largely agree with him on this point. My compiler-writing course at university was one of the most eye-opening courses I've ever taken, and I really wish it had been a 200-level course instead of a 400-level one.
I posted this on another thread, but it applies here too:
I believe you need a good foundation, but devote most of your time to learning what you will be using.
Learn enough assembler to add two numbers together and display the result on a console. You'll have a much better understanding of what is actually going on inside the computer, and it will make sense why we use binary/hex. (This can be done in a day, with DEBUG from cmd.exe.)
Learn enough C to have to allocate some memory and use pointers. A simple linked list is sufficient. (this can be done in a day or two).
Spend more time learning a language that you are going to use. I would let your interests steer you into which language (C#, Java, Ruby, Python, etc.).
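As a minimal sketch of the linked-list exercise suggested in the second step above (one possible version, assuming a C99 compiler):

    /* a simple singly linked list built with malloc and pointers,
       then freed by hand: there is no garbage collector here */
    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    static struct node *push(struct node *head, int value)
    {
        struct node *n = malloc(sizeof *n);
        if (n == NULL)
            exit(EXIT_FAILURE);
        n->value = value;
        n->next = head;              /* the new node becomes the head */
        return n;
    }

    int main(void)
    {
        struct node *head = NULL;
        for (int i = 1; i <= 3; i++)
            head = push(head, i);

        for (struct node *n = head; n != NULL; n = n->next)
            printf("%d ", n->value); /* prints: 3 2 1 */
        printf("\n");

        while (head != NULL) {       /* free each node yourself */
            struct node *next = head->next;
            free(head);
            head = next;
        }
        return 0;
    }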
