Member functions can be emulated in C by passing the this pointer explicitly. Virtual functions can be emulated by explicitly storing in every object a pointer to a global array of function pointers. Fine.
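For concreteness, a minimal sketch of what I mean (all names here are invented for illustration):

#include <stdio.h>

struct Shape;  /* forward declaration */

/* the "vtable": one global table of function pointers per concrete type */
struct ShapeVtbl {
    double (*area)(const struct Shape *self);
};

struct Shape {
    const struct ShapeVtbl *vtbl;  /* every object stores a pointer to its vtable */
};

struct Circle {
    struct Shape base;   /* "inherits" by embedding the base as the first member */
    double radius;
};

static double circle_area(const struct Shape *self) {
    const struct Circle *c = (const struct Circle *)self;
    return 3.14159265358979 * c->radius * c->radius;
}

static const struct ShapeVtbl circle_vtbl = { circle_area };

/* "virtual call": dispatch through the object's vtable, passing this explicitly */
static double shape_area(const struct Shape *s) { return s->vtbl->area(s); }

int main(void) {
    struct Circle c = { { &circle_vtbl }, 2.0 };
    printf("%f\n", shape_area(&c.base));
    return 0;
}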
Now my question is, do people actually do this? I am wondering if it's worth teaching this technique, because I do not want to teach something to C freshmen that is practically never used in the real world.
(I need to fill the last day of a two-week introductory C course for people already familiar with OOP.)
Are there any relevant projects, libraries or frameworks that emulate OO in C in the manner described?
I've about twenty years experience in C. It was the first compiled language I learned and I've never needed to move on, so it's been C and only C, all the way. I write code constantly at work and at home. I have published a library of lock-free data structures. I think I'm a competent C programmer.
With regard to your question, OO consists of a number of concepts. One, for example, is instantiation, e.g. a library with a new() and delete() and instances of a given entity (stack, list, etc). C supports this and it is, of course, a very functional and useful approach. I've used this approach for about fifteen years.
Many years ago I began experimenting with another OO concept, well supported in C++: inheritance. I wanted an entity which contained other entities. The problem then is exposing the API of the contained entities. You can do it, but the fact is, the C language does not naturally express such a concept and approach. It is not something I now use.
My advice is: a knife is a knife, a fork is a fork. You can use either as the other, but it doesn't work well. C does not naturally support some (important) OO concepts, such as inheritance. Don't try to make C do these things. If you want to do this, use C++.
Yes, they do.
Are there any relevant projects, libraries or frameworks that emulate OO in C in the manner described?
I wouldn't call it "emulating" just because there's no first-class language support. See GObject.
A lot of projects use object-oriented paradigms in a C codebase. For various reasons they don't use C++ directly. For system-level or performance-intensive projects, other languages don't cut it, so it's a battle between C++ and C.
Why people emulate OO in C instead of full-blown C++ is a topic of heated argument. Linus Torvalds once famously stated that C++ compilers are not trustworthy; he has little faith in C++-generated code.
The Linux kernel is a good example of implementing OO design patterns in C. You can read about how the Linux kernel did it in this LWN.net article series:
part1
part2
There is an extensive free document available on the internet which covers a full range of OO design pattern implementations in C.
ooc.pdf
You can find many other projects along the same road.
Examples:
pjsip
sofia
It may not be used in practice, but it is incredibly valuable to learn the concept of the equivalence between member functions and functions that take the object as the first parameter. Having this concept in the back of their head will help them in many problems they will encounter down the road.
Day in and day out I see people asking questions on Stack Overflow about why it doesn't work to pass a member function to something requiring a function pointer, and things like that. They think that member functions are just some magical functions that are part of an object, and over-complicate the whole situation. If they had realized that member functions were equivalent to functions that took the object as the first parameter, then the problem they're having (that to call the method they would somehow need both the member function pointer as well as the object), as well as possible solutions (somehow pass the object in separately, or make some kind of closure that captures the object), becomes apparent. Apparently, too many people just pretend that OO is "magic" and don't understand this.
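To make that concrete, here is a minimal C sketch of the "pass the object in separately" idea: a callback API takes a plain function pointer plus an opaque context pointer, and the context plays the role of this (all names below are made up for illustration):

#include <stdio.h>

/* a callback-based API: the caller supplies a function and an opaque context */
static void for_each(int *values, int n,
                     void (*fn)(void *ctx, int value), void *ctx) {
    for (int i = 0; i < n; i++)
        fn(ctx, values[i]);
}

/* the "object" */
struct Accumulator { long total; };

/* the "member function": just a function whose first parameter is the object */
static void accumulate(void *ctx, int value) {
    struct Accumulator *acc = ctx;
    acc->total += value;
}

int main(void) {
    int data[] = { 1, 2, 3, 4 };
    struct Accumulator acc = { 0 };
    for_each(data, 4, accumulate, &acc);  /* the object is passed alongside the function */
    printf("%ld\n", acc.total);
    return 0;
}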
In functional programming, we often teach people how data structures and local variables and all that stuff could be written purely in terms of manipulation of functions. Not that this is practical -- it would probably be inefficient -- but this impresses upon them something about the power of functions. And it helps them to understand things in a different way. And maybe down the road if they write a compiler or something, these equivalences will come in handy.
Computer science is all about equivalences and reductions, and how to think about one problem in terms of another. We reduce 3-SAT to subset sum, not because that's actually how we would solve the 3-SAT problem, but because this teaches us that subset sum is NP-complete.
Every once in a while, I come across a piece of code written by someone else, where non-instance methods take a pointer to a structure as an argument, and I see a pattern and a light bulb goes off in my head, and I say, ah-ha, this can be re-factored into an instance method, because I know about this equivalence. So you see, knowing these equivalences also helps us to write better, simpler code.
Check out TI's "DSP Algorithm Standard" / xDAIS framework.
There's a generic C API that every conforming DSP algorithm implementation implements (sorry for the tautology). The need for all this "art" stems from several issues common in the DSP world:
relatively small RAMs
multiple data channels (often parallel/concurrent)
complex algorithm usage patterns
something else I forget
The standard and framework aim at making it easier for DSP engineers to use 3rd party DSP algorithms.
There's an interface to configure an algorithm instance and query its memory requirements (based on the configuration) and there are support functions that actually manage the memory.
Some memory areas, scratchpads, can be allocated temporarily: they are given to an algorithm instance while it's active, taken away when it's inactive, and handed to another instance, so they are effectively shared.
There's also functionality (and APIs) to move instance memory buffers to defragment memory.
There's more, but I'd need to reread the docs to recall the details.
See IALG_*() and ALG_*() interface methods for example.
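To give a rough feel for the shape of such an interface, here is a purely hypothetical sketch of the pattern (invented names, not the actual IALG/ALG signatures; see the TI documents below for the real thing):

#include <stddef.h>

/* hypothetical "memory request" record: the algorithm describes what it needs,
   the framework decides where the buffers actually live */
struct MemRec {
    size_t size;
    int    is_scratch;   /* scratch buffers may be shared between inactive instances */
};

/* hypothetical per-algorithm interface: one table of function pointers per implementation */
struct AlgFxns {
    int  (*num_alloc)(void);                               /* how many buffers are needed? */
    int  (*alloc)(const void *params, struct MemRec *m);   /* fill in the memory requests */
    int  (*init)(void *handle, const struct MemRec *m,
                 const void *params);                       /* wire the buffers into an instance */
    void (*process)(void *handle, const short *in, short *out, int n);
    void (*free)(void *handle, struct MemRec *m);           /* report the buffers back */
};

The point of the pattern is that the algorithm only describes its memory needs; the framework owns the allocation policy.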
Also, there are tools to validate implementations of the generic APIs. 3rd parties can request official validation of them from TI.
Some relevant links: spru352g.pdf, spru360e.pdf.
I know several programming languages, including Objective-C, Java, C#, Python, and C. However, I need to brush up my efficiency in C.
In most languages that happen to be high-level, object-based, and GUI-oriented, I create a few standard object-oriented examples to orient me to the language/framework. I usually create a "car" example where I model a car and allow the user to adjust the speed, watching the mileage increase.
However, something tells me this example is not as practical to carry over to C in a unix command-line setting. What are some good basic ideas to 'test myself' in a unix command-line based C setting?
Thanks for any input!
EDIT:
Thanks for the answers. My basic concern with the car example is that I shouldn't be attempting object-oriented in this environment, but rather do something that is more suited to the language. Thanks to Duck for the suggestion of recreating command line utilities.
Get yourself a copy of The C Programming Language (2nd Edition).
Recreate (for the umpteenth time) any of the standard command line utilities - cat, ls, touch, more, less, etc. Write a basic shell. These will exercise C and re-familiarize you with unix system calls.
Try this: http://www.gowrikumar.com/c/
They provide puzzles and problems, have you work them out, and then let you check your answer. It might not be enough for what you asked for, but it might be a start.
You can write something similar with structures. Create a car structure and create functions that operate on the Car structure passed as a pointer (explicit this parameter). You can use conio.h or ncurses.h to do fancy console animation using timers. This can be fun and may provide some insight. You will also need to read keyboard input to increase/decrease speed.
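A minimal sketch of that idea (names are illustrative), with the "methods" taking a Car * as their explicit this parameter:

#include <stdio.h>

struct Car {
    double speed;    /* km/h */
    double mileage;  /* km   */
};

static void car_set_speed(struct Car *self, double speed) { self->speed = speed; }

/* advance the simulation by dt hours */
static void car_tick(struct Car *self, double dt) { self->mileage += self->speed * dt; }

int main(void) {
    struct Car car = { 0.0, 0.0 };
    car_set_speed(&car, 90.0);
    for (int i = 0; i < 10; i++) {   /* ten 6-minute steps */
        car_tick(&car, 0.1);
        printf("speed=%.0f km/h  mileage=%.1f km\n", car.speed, car.mileage);
    }
    return 0;
}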
Get "Advanced Programming in the Unix Environment" by W. Richard Stevens. The lectures and examples in this book are much more in tune, more realistic and practical to the type of work that is done in C. They show you the intent and purpose (and to a high the degree the philosophy) of the language.
That, IMO, will be better for brushing up your C (in addition to revisiting the oldie-but-goldie K&R book.)
I strongly recommend you go this route (in particular if you are coming from a UI, object-oriented world). One word of caution: do not try to do OO in C. It can be done, but that is not your immediate purpose. Learn how to code and model procedurally in C (in particular with examples tuned to its intended purpose).
I would like to improve my C skills in order to be more competent at converting R code to C where this would be useful. What hints do people have that will help me on my way?
Background: I followed an online Intro to C course a few years ago and that plus Writing R Extensions and S Programming (Venables & Ripley) enabled me to convert bottleneck operations to C, e.g. computing the product of submatrices (did I re-invent the wheel there?). However I would like to go a bit beyond this, e.g. converting larger chunks of code, making use of linear algebra routines etc.
No doubt I have more to learn from the resources I used before, but I wondered if there were others that people recommend? Working through examples is obviously one way to learn more: Brian Ripley gave a couple of examples of moving from S prototypes to S+C in this workshop on Efficient Programming in S and a more recent Bioconductor workshop Advanced R for Bioinformatics (sorry can't post hyperlink) includes a lab on writing an R+C algorithm. More like this, or other suggestions would be appreciated.
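For reference, the simplest entry point from R into C is the .C() interface described in Writing R Extensions: every argument arrives as a pointer. A minimal, illustrative sketch (the function name and arguments are made up):

/* scale.c -- compile with:  R CMD SHLIB scale.c
   call from R with: .C("vec_scale", x = as.double(x), n = as.integer(length(x)), f = as.double(2)) */
#include <R.h>

void vec_scale(double *x, int *n, double *factor)
{
    for (int i = 0; i < *n; i++)
        x[i] *= *factor;   /* .C() passes copies; the modified vector comes back in the result list */
}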
That is a very interesting question. As it happens, I had learned C and C++ before moving to R so that may have made it "easier" for me to add C/C++ to R.
But even with that, I would be among the first to say that adding pure C to R is hellishly complicated because of the different macros and R-internals at the C level that you need to learn.
Which leads me to my favorite argument: use an additional abstraction layer such as the Rcpp package. It hides a lot of the nasty details, and I hope that you don't need to know a lot of C++ to make use of it. One example of a package using it is the small earthmovdist package on R-Forge, which uses the Rcpp wrapper classes to interface one particular metric.
Edit 1: For example, see the main function of earthmovdist here, which should hopefully be easy enough to read, possibly with the (short) Rcpp wrapper classes package manual at one's side.
Edit 2: Three quick reasons why I consider C++ to be more appropriate and R-alike:
Using the Rcpp wrapper classes means you never have to use PROTECT and UNPROTECT, which are a frequent source of errors and heap corruption when not matched.
Using Rcpp with STL container classes like vector means you never have to explicitly call malloc() / free() or new / delete, which removes another frequent source of error.
Rcpp allows you to wrap everything in try / catch blocks at the C++ level and report the exception back to R --- so no sudden segfaults and program deaths.
That said, choice of language is a very personal decision, and many users are of course perfectly happy with the lower-level interface between C and R.
I have struggled with this issue as well.
If the issue is to improve command of C, there are plenty of book lists on the subject. They all start with K&R. I enjoyed "Expert C Programming" by P. van der Linden and "C primer" by S. Prata. Any reference on the C standard library works.
If the issue is to interface C to R, other than the aforementioned official R document, you can check out this Harvard course, and this quick start guide. I have only passed scalars and arrays to C, and honestly wouldn't know how to interface complex data structures.
If the issue is to interface C++ to R, or build C++ skills, I can't really answer as I don't use much C++. A good starting point for me was "C++ the Core Language" (O'Reilly). Very simple, primitive, but useful for people coming from C.
My primary recommendation is to look at other packages. Needless to say, not all packages use C code, so you will need to find examples that do. You can download the source code for all packages off CRAN, and in some instances, you can also browse them on R-Forge. Some R projects are also maintained on Google Code or sites like github (for instance, ggplot2). You will find the C code in the "src" directory.
Here's an example of a src directory with some C source code on R-Forge for the "survival" package.
Here's an example with C source code on Google Code for the "rpostgresql" package.
Here's an example with C source code on github for "rqtl".
In general, think about what you're trying to accomplish, and then look at packages that do similar things.
The "C Programming Language" book is probably still the most widely used, so you may want to have that on your bookshelf. The following free book is also a useful resource: http://publications.gbdirect.co.uk/c_book/
"What is the best book to learn C?" is a perenial SO question. (The middle link is probably the best.)
As for R-specific ways of learning C, I've found it instructive to download the R source code and take a look at some of the .Internal code.
EDIT: Someone else had just asked "What to read after K&R?"
If your goal is to use C to get rid of bottlenecks you'll need a good numerical library in C. There are lots, but I've found gsl (GNU Scientific Library) pretty useful.
http://www.gnu.org/software/gsl/
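For example, a small matrix product using GSL's BLAS wrapper looks roughly like this (a sketch; check the GSL manual for the exact details):

#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

/* compute C = A * B for two 2x2 matrices */
int main(void)
{
    gsl_matrix *A = gsl_matrix_alloc(2, 2);
    gsl_matrix *B = gsl_matrix_alloc(2, 2);
    gsl_matrix *C = gsl_matrix_calloc(2, 2);

    gsl_matrix_set_identity(A);
    gsl_matrix_set_all(B, 2.0);

    /* C = 1.0 * A * B + 0.0 * C */
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, A, B, 0.0, C);

    gsl_matrix_free(A);
    gsl_matrix_free(B);
    gsl_matrix_free(C);
    return 0;
}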
There is also the classic book "Numerical recipes in C" which provides an overview of important numerical techniques (though I don't recommend using their code verbatim).
In the Stack Overflow podcasts, Joel Spolsky constantly harps on Jeff Atwood about Jeff not knowing how to write code in C. His statement is that "knowing C helps you write better code." He also always uses some sort of story involving string manipulation and how knowing C would allow you to write more efficient string routines in a different language.
As someone who knows a little C, but loves to write code in perl and other high-level languages, I have never once come across a problem that I was able to solve by writing C.
I am looking for examples of real-world situations where knowing C would be useful while writing a project in a high-level/dynamic language like perl or python.
Edit: Reading some of the answers you guys have submitted has been great, but it still doesn't make any sense to me in this regard:
Take the strcat example. There's a right way and a wrong way to combine strings in C. But why should I (as a high-level developer) think that I am smarter than Larry Wall? Why wouldn't the language designers write the string manipulation code the right way?
The classic example that Joel Spolsky uses is on misuse of strcat and strlen, and spotting "Shlemiel the painter" algorithms in general.
It's not that you need C to solve problems that higher-level languages can't solve, it's that knowing C well gives you a perspective on what's going on underneath all those levels of languages that allows you to write better software. Because just such a perspective helps you avoid writing code which is, unknown to you, actually O(n^2), for example.
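A rough illustration of the point (assuming result is large enough to hold everything): both loops below build the same string, but the first silently rescans the whole result on every call.

#include <string.h>

/* Shlemiel the painter: strcat() walks result from the start each time,
   so appending n pieces costs O(n^2) character scans */
void build_slow(char *result, const char *pieces[], int n)
{
    result[0] = '\0';
    for (int i = 0; i < n; i++)
        strcat(result, pieces[i]);
}

/* same output, O(n) total: remember where the end is instead of re-finding it */
void build_fast(char *result, const char *pieces[], int n)
{
    char *end = result;
    for (int i = 0; i < n; i++) {
        size_t len = strlen(pieces[i]);
        memcpy(end, pieces[i], len);
        end += len;
    }
    *end = '\0';
}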
Edit: Some clarification based on comments.
Knowing C is not a prerequisite for such knowledge, there are many ways to acquire the same knowledge.
Knowing C is also not a guarantee of these skills. You may be proficient in C and yet still write horrible, grotty, kludgy code in every other language you touch.
C is a low-level language, yet it still has modern control structures and functions so you aren't always getting caught up in the fiddly details. It's very difficult to become proficient at C without gaining a mastery of certain fundamentals (such as the details of memory management and pointers), mastery of which often pays rich dividends when working in any language.
It's always about the fundamentals.
This is true in many pursuits as well as software engineering. It is not secret incantations that make the best programmers the best, rather it is a greater mastery of the fundamentals. Experience has shown that knowledge of C tends to have a higher correlation to mastery of certain of those fundamentals, and that learning C tends to be one of the easier and more common routes to acquiring such knowledge.
It's a mistake to assume that learning C will somehow automatically give you a better understanding of low-level programming concerns. In a lot of cases even C is too high level to give you a good understanding of efficiency concerns.
A classic is i++ versus ++i. It's over-cited, so perhaps most people know the implications about performance between these two operations. But learning C wouldn't magically teach you this by itself.
I guess I understand arguments about strings. When string operations are made deceptively simple, people often use them in inefficient ways. But again, knowing that strncat exists doesn't give you a full appreciation for the efficiency concerns. A lot of C programmers probably haven't even thought about the fact that strncat has to do a strlen operation internally.
Even using C, it's important to understand what's going on behind the scenes if efficiency is a concern. People who know C tend to view things in a progression. Assembly and machine code are the building blocks of C, while C is a building block of higher level languages.
This isn't specifically true, but it's obvious that C is "closer to the metal" than many higher level languages. This has at least two effects: efficiency concerns aren't as hidden behind implicit behavior, and it's easier to screw up.
So you want a specific example of how knowing C gives you an advantage. I don't think there is one. I think what people mean when they say this is that knowing what's going on behind the scenes in whatever language you're happening to write for helps you make more intelligent decisions about how to write code. However, it's a mistake to assume that C is "what's going on behind the scenes" in Java, for instance.
It's hard to quantify exactly, but having an understanding of C will give your more insight into how higher-level language constructs are implemented, and as a consequence you'll be better able to use the constructs in an intelligent manner.
To give you a specific reason: having to write my own garbage collection routines has helped me write better code.
I don't think I have ever found a problem that I haven't been able to solve with a higher-level language; but having started by learning C, it instilled in me quite a number of excellent development practices. Knowing how the rudimentary parts of an application's flow work will enable you to look at your own code and get a good visual of how the data flows and where it is stored. This then leads to a better understanding of how to track down leaking memory, slow disk reads, poorly constructed caches, etc.
Keeping track of Pointers... that's another one that comes to mind.
Classic examples are things involving lower level memory management, such as the implementation of a linked list class:
struct Node
{
    Data *data;   // payload; Data is defined elsewhere
    Node *next;   // link to the next node in the list
};
Understanding how the pointers are used to iterate the list, and what they signify in terms of the machine architecture will allow you to better understand your high level code.
Another example which Joel was referring to was the implementation of string concatenation, and the right way to create a string from a set of data.
// this is NOT efficient: strcat() rescans str from the beginning on
// every iteration, so building the string costs O(n^2) overall
for (int i = 0; i < n; i++)
{
    strcat(str, data(i));
}
// this is usually fine: std::string::operator+= appends in amortized
// constant time, but you'd need to look at the implementation to be sure
std::string str;
for (int i = 0; i < n; i++)
{
    str += data(i);
}
Knowing C helps you to write better code in C. I guess that the example of Joel Spolsky is of little use in C++ or Objective-C, where specific classes for manipulating strings exist and have been crafted with performance in mind. Moreover, using C tricks in other languages may be counter-productive.
Nevertheless, C knowledge is very helpful for understanding general concepts in other languages and what is under the hood in many situations.
As someone who knows a little C, but loves to write code in perl and other high-level languages, I have never once come across a problem that I was able to solve by writing C.
I am looking for examples of real-world situations where knowing C would be useful while writing a project in a high-level/dynamic language like perl or python.
It's easy to start writing high-level code and then wonder why it's running slow. The truth is there are many ways to write perl or python code, and some are better (as in more efficient) than others. If you know the low-level details of how your code is executed in perl or python (both of which are written in C), you can code around several inefficiencies, like knowing which looping construct is faster, how memory is retained/released, etc.
Also, when writing a project in perl or python you sometimes hit a performance wall. The creators of the language (Guido, at least) advocate that you implement that part in C, as a language extension. To do that, well, you'll have to know C.
So, there.
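For what it's worth, a CPython 3 extension written in C looks roughly like this (the module and function names are made up; see the official "Extending Python with C" documentation):

#include <Python.h>

/* sum the integers 0..n-1 in C instead of a Python loop */
static PyObject *sum_range(PyObject *self, PyObject *args) {
    long n, i, total = 0;
    if (!PyArg_ParseTuple(args, "l", &n))
        return NULL;
    for (i = 0; i < n; i++)
        total += i;
    return PyLong_FromLong(total);
}

static PyMethodDef Methods[] = {
    {"sum_range", sum_range, METH_VARARGS, "Sum the integers 0..n-1."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "fastmod", NULL, -1, Methods
};

PyMODINIT_FUNC PyInit_fastmod(void) {
    return PyModule_Create(&moduledef);
}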
For the purposes of argument, suppose you wanted to concatenate the string representations of all the integers from 1 to n (e.g. n = 5 would produce the string "12345"). Here's how one might do that naïvely in, say, Java.
String result = "";
for (int i = 1; i <= n; i++) {
result = result + Integer.toString(i);
}
If you were to rewrite that code segment (which is quite good-looking in Java) in C as literally as possible, you would get something to make most C programmers cringe in fear:
char *result = malloc(1);
*result = '\0';
for (int i = 1; i <= n; i++) {
    char *intStr = malloc(11);
    itoa(i, intStr, 10);  /* itoa is non-standard; snprintf(intStr, 11, "%d", i) is the portable form */
    char *tempStr = malloc(strlen(result) + strlen(intStr) + 1);
    strcpy(tempStr, result);
    strcat(tempStr, intStr);
    free(result);
    free(intStr);
    result = tempStr;
}
Because strings in Java are immutable, Integer.toString creates a new temporary string, and string concatenation creates a new string instance instead of altering the old one. That's not easy to see from just looking at the Java code. Knowing how said code translates into C is one way of learning exactly how inefficient said code is.
Do you use arrays much? And do you come across situations where you need items to be stored in memory without knowing how many of them there will be (i.e. based on a query from the database)? Then I suppose C would teach you great things like stacks, structs and linked lists, which might help you. Regards, Andy
Knowing C is really not worth much. Many of us who know C deeply like to think that all that deep insight is valuable and important.
Some of us who know C can't think of a single specific feature of C that's helpful to know about.
Knowing how pointers work in C (especially with C's syntax) isn't all that helpful. In a high-level language your statements create objects and manage their interaction. Pointers and references are -- perhaps -- interesting from a hypothetical point of view. But the knowledge has no practical impact on how you use Java or Python.
The higher-level languages are the way they are. Knowing how doesn't change those languages; it doesn't change how you use them, debug or test them.
Knowing how to create or manipulate a linked list has no earthly impact on Python list class definition. None.
Knowing the difference between Linked List and Array List might help you write a Java program. But the C implementation doesn't help you choose between Linked List and Array List. The decision is independent of knowing C.
A bad algorithm is bad in every language. Knowing inner mysteries of C doesn't make a bad algorithm any less bad. Knowing C doesn't help you know the Java collections or the Python built-in types.
I can't see any value in learning C. Learning Fortran is just as valuable.
Technically, all of the deficiencies of C would force you to code around them; making you write more code -> making you more experienced in general. Lacking any portable integer bigger than 32-bits, for example, C has, in the past, made me write my own bignum library.
The lack of implicit memory, resource and error management (garbage collection, RAII, automatically-called constructors/destructors, maybe exceptions) forces C users to write a lot of initialization, error-handling and cleanup code. It may just be me, but I'm never tired of writing such code. I go and read the documentation of every external function I call, return to my code and check every return value and other failure-indicative stuff. It even makes me feel safe!
This last point is probably the biggest one to be made in favor of the argument. You can only write so many malloc()/free() pairs before you start to analyze the lifetime of every single variable you come across in every single language! C++'s automatic-storage objects don't help this disorder, either.
Writing truly portable C code often requires the programmer to be free of a lot of assumptions about the host system - think sizeof(), CHAR_BIT, unsigned long, UINT_MAX. While this hasn't helped me write better code in other languages, it has helped me think about possible alternate implementations: how a tiny microprocessor could still run my C code, generating a gazillion RISC instructions for my simple one-line statement. (That is another thing; not many other languages map to and from a given assembly language so easily in my head. Then again, that may just be me.)
Of course, none of these arguments apply only to C. S.Lott has a valid point - Fortran might be an equally good alternative. But there is so much C code around! A whole personal computer system from top to bottom - applications to libraries to drivers to kernel - is available in source code in C. It would be such a waste if you could not read it.
I think it is worth knowing some low-level language, and there are pragmatic reasons to choose C:
It's low-level, close to assembler
It's widespread
Understanding the whole stack is valuable. Sometimes you need to debug something's guts. Sometimes you cannot fix a performance problem without low-level knowledge (this is often not the case, e.g., when the performance problem is purely algorithmic, but sometimes it is).
Why is C widely considered the quintessential "bottom of the stack", and not some other language(s)? I think this because C is a low-level programming language, and C won. It has been a while now, but C was not always as dominant. To take just one famous example, the proponents of Common Lisp (which had its own ways of writing low-level code) were hoping their language would be popular, too, and eventually lost.
The following are usually implemented in C:
operating systems (Unix variants, Windows, many embedded operating systems)
higher-level programming languages (many popular implementations of Java, Python, etc)
(obviously) reams of popular open source projects
I'm not a hardware person, but I gather that C has influenced CPU design heavily, too.
So if you believe in understanding the whole stack, learning C is, from a pragmatic perspective, the best choice.
As a caveat, I think it's worth learning assembler as well. Although C is close to the metal, I didn't fully understand C until I had to do some assembler. It is occasionally helpful to understand how function calls are actually performed, how for loops are implemented, etc. Less important, but also useful, is having to (at least once) deal with a system without virtual memory. When using C on Windows, Unix, and certain other operating systems, even humble malloc does a lot of work under the covers that is easier to appreciate, debug and/or tune if you've ever had to deal with manually locking and unlocking memory regions (not that I would recommend doing so on a regular basis!)
I see it like this: everything boils down to C at a cross-platform level, and to assembly in a platform-specific way. So it's like being a cross-country rally racer, and C is basic automotive mechanics. You can be a great driver, but when you get into trouble, knowing C means you can probably get yourself back in the race; if not, you're stuck calling the mechanics. And assembly is what the mechanics and manufacturers know; it's a worthy investment if that's what you want to do, otherwise you can just trust the mechanics.
For specifics, think about memory management, hardware drivers, physics engines, high-performance 3D graphics, TCP stacks, binary protocols, embedded software, and creating high-level languages like Perl.
You cannot write an OS kernel in Perl; C would be a much better choice for that, because it is low-level enough to express everything the kernel should do, and portable enough to let you port your kernel to different architectures.
Knowing C is not a requirement to being able to effectively use higher-level languages, but it certainly can help ones general understanding of how computers and software work - I think it's similar to an assertion that knowing some assembly language or computer architecture/hardware logic (and/or/nand gates, etc) can help a C programmer be a better programmer.
Sometimes in order to solve a problem it helps to know how things are working 'underneath' what you're doing.
I don't think this means a programmer must know C in order to be a good programmer, but I think that knowing C can be helpful to almost any programmer.
Not knowing Perl well, I am wondering if it is now possible to distribute processor load to more than one physical core with several threads created in a single program in Perl, without spawning additional processes
I don't think there can be any specific example.
What learning C does for you is give you an insight, a broadening of the mind, into how computers (and software) work. It's a very abstract thing.
It doesn't make you write better code in python, it just makes you more of a computer scientist.
The reference that Wedge made to Joel's article mentioning Shlemiel the painter is an interesting one but has no relevance here. That algorithm is not tied to C in any particular way (although it manifests itself in null-terminated strings).
Python's strings are immutable anyway, and completely different from C's model of strings, so I don't quite see the relationship.
I suppose one concrete example is optimizing a parser or a lexer or a program that keeps writing to a string buffer all the time. If you use normal strings instead of a string buffer, you'll run across a problem when you build very large strings.
Consider that:
a = a + b
makes a copy of both a and b. It doesn't change the string that was referenced by a, it creates a new string, allocating more memory, etc.
If a becomes considerably large, and you keep adding small things to it, then Shlemiel the painter will manifest himself.
But then again, knowing this has nothing to do with knowing C, just knowing how your language implements things at the low level. (This is where having experience in C will help you.)
In Python, say you have a function
def foo(l=[]):
    l.append("bar")
    return l
On some version of Python available about a year ago, running foo() four times, you'd get a really interesting result (i.e. ["bar", "bar", "bar", "bar"]).
It seems that someone implemented the default parameters as a static variable (and without resetting it), so unexpected results happen.
Perhaps my example was contrived - a friend of mine who actually likes Python found this peculiar bug, but the fact of the matter is all of these languages are implemented in C or C++. Not knowing and not understanding concepts that are fundamental to the base language means that you won't have an in-depth understanding of languages that are built on top of that.
I find all the "why bother with C/C++/ASM" questions silly. If you're inclined enough to learn a language, that means that you're curious enough to get into it in the first place. Why stop just before C?
Knowing C is great because it does nothing behind your back (GC, bounds checking, etc.). It only does exactly what you tell it to. Nothing is implied. Even C++ does things you don't tell it to with RAII (of course, it is implied that the object is destructed when it goes out of scope, but you don't actually write that). C is a great way to learn what goes on 'under the hood' of the computer, without having to write assembly.
Inefficient code (e.g. loops of string +=) is typically inefficient in any language. What difference does it make whether someone explains why it is inefficient in one language or the other? Knowing C but not realizing that a method is inefficient is no different from knowing Python and not realizing the same.
Being an aspiring Apple developer, I want to get the opinions of the community: is it better to learn C first before moving on to Objective-C and ultimately the Cocoa framework?
My gut says learn C, which will give me a good foundation.
I would learn C first. I learned C (and did a lot in C) before moving to Obj-C. I have many colleagues who never were real C programmers, they started with Obj-C and learned only as much C as necessary.
Every now and then I see how they solve a problem entirely in Obj-C, sometimes resulting in very clumsy solutions. Usually I then replace some Obj-C code with pure C code (after all, you can mix them as much as you like; the content of an Obj-C method can be entirely pure C code). Without any intention to insult any Obj-C programmer, there are solutions that are very elegant in Obj-C, these are solutions that just work (and look) a lot better thanks to objects (OO programming can make complex programs much more lovely than procedural programming; polymorphism, for example, is a brilliant feature)... and I really like Obj-C (much more than C++! I hate the C++ syntax and some language features are plain overkill and lead to bad development patterns IMHO); however, when I sometimes re-write the Obj-C code of my colleagues (and I really only do so if I think this is absolutely necessary), the resulting code is usually 50% smaller, needs only 25% of the memory it used before and is about 400% faster at runtime.
What I'm trying to say here: Every language has its pros and cons. C has pros and cons and so does Obj-C. However, the really great feature of Obj-C (that's why I even like it more than Java) is that you can jump to plain C at will and back again. Why this is such a great feature? Because just like Obj-C fixes many of the cons of pure C, pure C can fix some of the cons of Obj-C. If you mix them together you'll receive a very powerful team.
If you only learn Obj-C and have no idea of C or only know the very basics of it and never tried how elegantly it can solve some common problems, you actually learned only half of Obj-C. C is a fundamental part of Obj-C. The ability to use C at any time and everywhere is a fundamental feature of it.
A typical example was some code we used that had to encode data in base64, but we could not use an external library for that (no OpenSSL lib). We used a base64 encoder, entirely written using Cocoa classes. It was working okay, but when we made it encode 200 MB of binary data, it took an eternity and the memory overhead was unacceptable. I replaced it with a tiny, ultra compact base64 encoder written entirely as one C function (I copied the function body into the method body, method took NSData as input and returned NSString as output, however inside the function everything was C). The C encoder was so much more compact, it beat the pure Cocoa encoder by the factor 8 in speed and the memory overhead was also much less. Encoding/Decoding data, playing around with bits and similar low level tasks are just the strong points of C.
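This is not the code from that project, but to give an idea of how compact such a C encoder can be, here is an illustrative sketch (no input validation):

#include <stdlib.h>
#include <string.h>

static const char b64_table[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* returns a malloc'ed, NUL-terminated base64 string; caller frees it */
char *base64_encode(const unsigned char *in, size_t len)
{
    size_t out_len = 4 * ((len + 2) / 3);   /* 4 output chars per 3 input bytes */
    char *out = malloc(out_len + 1);
    size_t i, j;
    if (!out) return NULL;
    for (i = 0, j = 0; i < len; i += 3) {
        unsigned v = (unsigned)in[i] << 16;             /* pack up to 3 bytes into 24 bits */
        if (i + 1 < len) v |= (unsigned)in[i + 1] << 8;
        if (i + 2 < len) v |= (unsigned)in[i + 2];
        out[j++] = b64_table[(v >> 18) & 0x3F];
        out[j++] = b64_table[(v >> 12) & 0x3F];
        out[j++] = (i + 1 < len) ? b64_table[(v >> 6) & 0x3F] : '=';   /* pad short groups */
        out[j++] = (i + 2 < len) ? b64_table[v & 0x3F] : '=';
    }
    out[j] = '\0';
    return out;
}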
Another example was some UI code that drew a lot of graphs. For storing the data necessary to paint the graphs, we used NSArrays. Actually NSMutableArrays, since the graph was animated. Result: very slow graph animation. We replaced all NSArrays with normal C arrays, objects with structs (after all, graph coordinate information is nothing you must keep in objects), enumerator access with simple for loops, and started moving data between the arrays with memcpy instead of taking data from one array to the other, index by index. The result: a speed-up by a factor of 4. The graph animated smoothly, even on older PPC systems.
The weakness of C is that any more complex program gets ugly in the long run. Keeping C applications readable, extensible and manageable demands a lot of discipline from the programmer. Many projects fail because this discipline is missing. Obj-C makes it easy for you to structure your application using classes, inheritance, protocols and so on. That said, I would not use pure C functionality across the borders of a method unless necessary. I prefer to keep all code in an Objective-C app within the methods of objects; everything else defeats the purpose of an OO application. However, within a method I sometimes use pure C exclusively.
You can readily enough learn C and Objective-C at the same time -- there's certainly no need to learn the minutiae of C (including pointer arithmetic and so on) before starting with Objective-C's additions to the language, and as a novice programmer getting underway with Objective-C quickly may help you to start "thinking in objects" more quickly.
In terms of available resources, Apple's documentation does typically assume familiarity with C, so starting with The Objective-C 2.0 Programming Language won't be of much benefit to you. I would invest in a copy of Programming in Objective-C by Stephen Kochan (depending on how quickly you want to get underway, you may consider waiting for the second edition):
Programming Objective-C Developers Library
Programming Objective-C 2.0 Developers Library
It assumes no prior experience, and teaches you Objective-C and as much C as you need.
If you're feeling a little ambitious, you might start with Scott Stevenson's "Learn C" Tutorial, but it does have some prerequisites ("You should already know at least one scripting or programming language, including functions, variables and loops. You'll also need to type commands into the Mac OS X Terminal.").
(Just for the record and for context: I learned both at the same time back in 1991 -- it didn't seem to do me any harm. I did, though, have a background in BASIC, Pascal, Logo, and LISP.)
I thought a lot about this issue before writing my book on Objective-C. First, I really believe that learning the C language before learning Objective-C is the wrong path. C is a procedural language containing many features that are not necessary for programming in Objective-C, especially at the novice level. In fact, resorting to some of these features goes against the grain of adhering to a good object-oriented programming methodology. It’s also not a good idea to teach all the details of a procedural language (and attacking a problem's solution with functions and structured programming techniques) before learning an object-oriented one. This can start the programmer off in the wrong direction potentially leading to developing the wrong orientation and mindset for fostering a good object-oriented programming discipline. Just because Objective-C is an extension to the C language doesn’t mean you have to learn C first!
I think that teaching Objective-C and the underlying C language as a single integrated language is the right approach. There's no reason to learn that a "for" statement is from the C language and not from its superset Objective-C language. Further, why learn in detail about things like C arrays and strings (and manipulating them) before learning about array (NSArray) and string objects (NSString), for example? Many C texts devote a lot of time to structures, and pointers to structures, and iterating through arrays with pointers. But you can start writing Objective-C programs without knowing about any of these C language features. And for a novice programmer, that's a big deal. Not only does that shorten the learning curve but also reduces the amount of material that has to be learned (and some of it selectively filtered out) to writing Objective-C programs.
I do agree that you will want to learn most, if not all, of the underlying C features, but many can be deferred until a solid grasp of defining classes and methods, working with objects and message expressions, and understanding the concepts of inheritance and polymorphism are well-understood.
I'd dive right in with Objective C - if you've already got a few languages under your belt, it's not the syntax which is the learning curve, it's Cocoa.
I think that, for the most part, learning C is a good idea no matter what arena you're going into, at least to get the hang of the inner workings of software development before using prepackaged goods; that way, if something goes wrong, you have a better chance of understanding the inner workings. There's plenty of discussion about this on SO, and it's a rather subjective question, but in general you will inherently be using C within your Objective-C code, so I guess it's really up to you. I'm a ground-up kind of person, but sometimes it can get in the way, and I know several smart people who worked their way from the top down. I think the important part is that you get to understand the inner workings, as it will set your capabilities apart from those who don't, as well as increase your capabilities.
It's a good idea to learn C before learning Objective-C, which is a strict superset of C. This means that Objective-C can support all normal C code, so the code common to C programs is bound to show up even in Objective-C code.
In addition to looking at things purely from a language point of view, you will find that Mac OS X is a complete Unix operating system. All the system level libraries are written in C.
It is probably possible to learn both at the same time, but I think you will appreciate and understand Objective-C more if you have a solid working knowledge of C first.
I'd learn Objective-C and learn as much C as you need as you go along.
The areas of C that you won't depend on much:
Pointer arithmetic and arrays. I haven't used C arrays at all.
C strings. Objective-C's strings do the job nicer and safer.
Manual memory management if you use GC in Obj-C 2.1. I highly recommend this route for development speed and performance reasons.
As you learn Objective-C and Cocoa, you cannot avoid learning bits of C. For example, rectangles are commonly represented by CGRect, a C struct.
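For instance, even in otherwise pure Objective-C code you end up writing plain C like this (a small illustrative snippet):

#include <CoreGraphics/CoreGraphics.h>  /* the plain C API underneath much of Cocoa */

CGFloat shifted_width(void)
{
    CGRect frame = CGRectMake(0.0, 0.0, 320.0, 44.0);  /* CGRect is a C struct built by a C function */
    frame.origin.y += 20.0;                            /* ordinary member access, no objects or messages */
    return frame.size.width;
}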
If you have time, by all means learn C. As others have said here, Kochan's book (second and first editions) is excellent as a book to dip into.
There are a lot of things you can't do purely in Objective-C, so learning some basic C skills will be pretty critical. You'll at least need to understand variable declarations and the basic C library functions, or you'll be frustrated.
Honestly, so many languages are based on the C syntax that it's a good thing to be familiar with. I'd take a week or two to familiarize yourself with C regardless.
That said, I did just teach myself Objective C, and I have to be honest: I didn't find my C experience to be as useful as I would have thought. Objective C was definitely eye-opening for me.
You can jump directly into Objective-C, with the following benefits:
You'll learn "some" C in the way.
You'll learn the C parts that are relevant for you.
At least for me, it's easier to learn a new language when I'm interested in some specific app or sample, and I fail when I have to learn something that is not exactly what I'm interested in.
You can always refine your C knowledge later if you get interested in lower level programming.
Whether it's better, I don't know, even less so as I am not familiar with Objective-C.
But bases of C aren't so hard to learn, it isn't a very complex language (in terms of syntax, not in terms of mastering!), so go for it, it won't be wasted time.
Personally, I think it is always a good idea to learn C, it gives a good insight of how computer works. After all, most languages and systems are still written in C. Then move on! :-)
PS.: By "move on", I didn't mean "drop it", just "learn more, learn different". Once you know C, you might never drop it: Java uses JNI to call C routines for low level stuff, Python, Lua, etc. are often extended with C code (Lua reference even just assumes some C knowledge for some functions which are just a thin wrapper to the C function behind), and so on.
Yes, learning the C language before any other advanced languages will help you learn other languages quickly.
According to Wikipedia, Objective-C is a strict super-set of C. This being the case, I would suggest learning C first. Then when you learn Objective-C it will be clear what parts are added as part of Objective-C.
C gives you very little abstraction from assembly. Some C Compilers will even let you inline assembly. This can be very useful for thinking about how the computer works, which is important to know.
That being said, if you're really interested in Objective-C, don't let yourself get stuck writing something in C just because it's "good for you". You don't need to frustrate yourself while you're trying to learn a new skill set. It is important that you have fun with what you're doing.
Do you want to be a hard-core developer? Then learn c first.
The books you need to completely master c are some of the best writings in technology. Here's what you need:
C Programming Language
The Standard C Library
Objective C is sufficiently different from C as to not merit learning C first.
From a syntax / language-family perspective one is almost better off studying Smalltalk (on which Objective-C is based).
From a practical perspective, focus your efforts on learning one language at a time.
Later, if you wish to learn another language, C++, Java and Python are 1) easy to learn as a bunch 2) popular and thus marketable 3) powerful.
You should have a basic knowledge of C before starting Objective-C, but there's no need to master every detail of C.
I've published my notes after reading "Programming in Objective-C" in case it helps someone else.
Depending on many languages you already know it may be a better idea to just start learning Objective-C. The foundation in most languages are basically the same, it's the syntax that is different. Learning C first isn't really going to make much of a difference when it comes to learning Objective-C.
I learned Objective-C straight away and it has worked fine for about a year now. I just had some difficulty reading C code when I downloaded projects to see how they work, but now I really feel the need to learn C. You can try learning ObjC without C, but sooner or later, you will need C.
IMHO one should first learn at least some C and especially about pointers. That’s even more important if one comes from a language that doesn’t have pointers. A lot of people ask about code like
NSString *string = [[NSString alloc] init]; // string points to a freshly allocated (empty) NSString
string = @"something";                      // only the pointer is reassigned; the object allocated above is leaked
since they don’t know about the distinction between a pointer and the object it points to.
Of course one doesn’t have to learn all of C before one can start with Objective-C, but some fundamental things are absolutely necessary.
Heck no, go straight to objective C!
I moved from ActionScript 3 to Objective C, and I already have an intern at a company!
Do what you want.
If you learn some other language first, you will always have some confusion writing the right syntax. I do not know the purpose, but Objective-C uses a weird (uncommon) syntax for calling object methods. It calls this sending messages; yes, that is true according to pure object-oriented concepts, however most object-oriented languages call it calling a method and use the more traditional syntax for doing so. Garbage collection is also something very odd; Objective-C is based on old-school reference counting. So you will have difficulties accepting it if you switch from another language.
I am writing a book, an Objective-C quick migration guide for C/C++ programmers, hoping to help people pick up all the differences more quickly.