Does C (or any other low-level language, for that matter) even have source code, or is the compiler the part that "does all the work", including parsing? If so, couldn't different compilers have different C dialects? Where does the stdlib factor into this? I would really like to know how this works.
The C language is not a piece of software but a defined standard, so one wouldn't say that it's open-source, but rather that it's an open standard.
There are a gazillion different compilers for C, however, and many of those are indeed open source. The most notable example is GCC's C compiler, which is released under the GNU General Public License (GPL), an open-source license.
There are more options. Open Watcom is open source, for instance. There is no shortage of open-source C compilers, but without a doubt the most widespread one, at least in the non-Windows world, is GCC.
For Windows, your best bet is probably Open Watcom or GCC via Cygwin or MinGW.
C is a standard which specifies what a conforming compiler must accept and how the programs it translates must behave.
C itself doesn't have any source code, just like a musical note doesn't have any plastic.
Some C compilers, such as GCC, are open source.
C is just a language, and a standardised one at that. It pretty much is the compiler that "does all the work". Different compilers did have different dialects; before the first ANSI standard (C89), you had things like Borland C and other competing compilers that implemented the C language in their own fantastic ways.
stdlib is just an agreed-upon collection of standard libraries that are required to be present in any ANSI C implementation.
To add on to the other great answers:
Regarding different dialects: there are some additional features added to C that are compiler specific. You can provide the command line flag -std=... to gcc to specify the C standard that you want to use; each has slight variations/additions to syntax, and the most common is probably c99.
Each compiler tends to implement a few different extras; for example, typeof() is not in the C standard, so compilers do not have to implement it, but it is nevertheless useful and most compilers provide it. GCC's manual documents a list of its C extensions.
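As a sketch, here is the sort of thing typeof() enables; this builds with gcc's default GNU dialect, but not with -std=c99 -pedantic (the SWAP macro is just an illustration):

#include <stdio.h>

/* GNU C extension: typeof() yields the type of its operand, letting a
   macro declare a temporary without knowing the operand's type up front. */
#define SWAP(a, b) do { typeof(a) tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

int main(void)
{
    int x = 1, y = 2;
    SWAP(x, y);
    printf("%d %d\n", x, y); /* prints "2 1" */
    return 0;
}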
The stdlib is a set of functions specified in the C standard. Much like compilers, stdlib can have different implementations. The GNU implementation is open source, as is gcc, but there are other compilers and could be other implementations of stdlib that are closed source.
The compiler determines all the mappings from C to assembly etc., but as far as someone owning it: no one really owns C; ANSI/ISO maintains the standards.
GCC's C compiler is written in C. So we know there is at least one C compiler written in C.
GNU's implementation of the standard library (glibc, which provides stdio.h, stdlib.h and so on) is also written in C, though it also has some parts written in assembly language.
A really good question. There is a way to define a language standard (not the implementation!) in the form of "source code", in a strict and unambiguous language. Unfortunately, all of the old languages, including C, are poorly defined. But it is still possible to translate those definitions into a source code form.
Another approach is to define a language via its operational semantics, often in the form of a simple (and inefficient) reference implementation.
Helgi Hrafn Gunnarsson has written the main answer but I thought it would be worth noting that you can effectively end up with dialects too.
The compilers should do the same thing with regard to whichever standard they support (which these days should be pretty much all the same version), but there are grey areas: the way in which compilers handle 'undefined' behaviour, for example. If the C specification says that the behaviour is undefined for a specific case, then the compiler can do pretty much what it wants.
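A small illustration of how far that latitude goes; what actually happens depends entirely on your compiler and flags:

#include <stdio.h>

/* Signed overflow is undefined: i + 1 overflows when i == INT_MAX.
   A compiler may assume overflow never happens and fold this test
   to "always 1". */
int wraps(int i)
{
    return i + 1 > i;
}

int main(void)
{
    /* With gcc -O2 this often prints 1; with -fwrapv or another compiler
       it may print 0. Neither result is "wrong": the behaviour is undefined. */
    printf("%d\n", wraps(2147483647));
    return 0;
}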
There are also examples of functions added to the libraries (and new libraries added) by the compiler makers to support specific platform traits, create a competitive advantage or simply to make life easier. The cynical might suggest that some of these are added to help lock people into a specific compiler too.
I would say that C as a language is not open source.
As pointed out by many, you can download GNU licensed compilers and libraries for free, but if you wanted to write your own C compiler, you would need to follow the ISO C standards, and ISO charge hard cash for the specification of the C language, which at the time of posting this is $178.
So really the answer depends on what elements you are interested in being free and open source.
I'm not sure what your definitions of "open source" are.
For the standardization process, it is possible for anyone to participate, but if you want to be able to vote then you will need to pay to join your national body (for instance, ANSI for the USA, BSI for the UK, AFNOR for France etc.). As a rule most standards body memberships are paid by corporations. That said, the process is fairly open. You can access discussion papers on the standards web site.
The standards themselves are not free either. The ISO pdf store currently sells the C standard for 198 Swiss francs. Draft copies of the standard can be found easily for free.
There are plenty of open source implementations of both compilers and libraries.
Regarding the three criteria of agent-oriented programming paradigm:
support a logical system for defining the mental state of agents
interpreted programming language for programming agents
agentification process, for compiling agent programs into low-level executable systems (tied into second point)
Are there interpreted programming languages that are not compiled? To my understanding, the whole point of an interpreted language is to implement a new language with certain features, syntax, etc., but the underlying implementation eventually needs to compile down into something low-level so that it can actually be executed.
Is point 3 of the agent-oriented programming paradigm simply saying that it isn't sufficient to just theoretically define a language without implementing the language in something that can compile down into low-level code that can actually be run?
Yes, Jason is fully interpreted. It is a BDI agent platform. It also supports dynamic (on-the-fly) programming. You can add and organize plans in runtime and you can also save the agent mental state and load a new content with the whole system running.
Actually, there is a continuum between compiled and interpreted languages, and being compiled or interpreted is a property of the language implementation (a programming language is a specification, that is, a document like R5RS; it is not software).
I strongly recommend reading Queinnec's Lisp In Small Pieces book, which explains that in great detail (see also this). I also recommend reading Scott's Programming Language Pragmatics book.
BTW, Minsky's Society of Mind book and Pitrat's Artificial Beings: The Conscience of a Conscious Machine book should also interest you. And J.Pitrat's blog is also relevant.
Many "compiled" languages have "interpreted" parts. For example, in C, most printf implementations are "interpreting" the control format string (this is done in the printf function of the C standard library), even if the specification permits some form of "compilation". (and sometimes, GCC or Clang might be clever enough...)
Are there interpreted programming languages that are not compiled?
Read also about partial evaluation and Futamura projections.
Study Common Lisp and look inside its SBCL implementation, which compiles into machine code every REPL interaction. Look also into LuaJit.
Be also aware of JIT-compiling libraries such as libgccjit, GNU lightning, asmjit, or LLVM.
How can one add his/her own functions to the standard C (ANSI C) library, so that anyone learning or working with C would be able to use those functions anywhere, anytime, with no development needed on their part?
Example: someone developed a function named "FunFun()", and assume it does fantastic work for most programmers. How could anyone access this "FunFun" function by just including the standard library, without developing it themselves?
The sane way to approach it would be to develop a 3rd party library and make it available over the internet through open source, Github etc.
The GNU C dialect is one such example: a collection of non-standard compiler extensions used by the GCC compiler. One could join the GCC open source group and try to get the new function added there. It would still not be standard library C, but the GCC extensions are often an inspiration to the C standard, and several of them (designated initializers, flexible array members, anonymous struct/union etc) have been adopted into the language itself with the C99 and C11 standards. One of the purposes of GNU C is actually to serve as an experimental playground where new language features can be tried out live.
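As a rough illustration, here are three of those formerly-GNU features in their now-standard form; this should compile with gcc -std=c11:

#include <stdlib.h>

struct packet {
    int kind;
    union { int code; float ratio; }; /* anonymous union: standard since C11 */
    unsigned char data[];             /* flexible array member: standard since C99 */
};

int main(void)
{
    int primes[3] = { [0] = 2, [1] = 3, [2] = 5 }; /* designated initializers: C99 */
    struct packet *p = malloc(sizeof *p + 16);     /* room for 16 data bytes */
    if (p) {
        p->kind = primes[0];
        p->code = 7;
        free(p);
    }
    return 0;
}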
If you truly wish to add a new function to the actual C standard library, you would have to join the ISO working group and convince them that the function should be added to the language. Or alternatively find a member of the committee and convince them to speak in favour of the new function.
All this of course assuming you are a C programming veteran, or otherwise nobody will likely take you seriously.
Your question can't be answered because it's based on several wrong assumptions.
Things like stdlib.h are not libraries. They are header files intended to be included in your program. Including means the contents are literally pasted into your program at the point of inclusion before the actual compilation happens. They are typically used for declaring functions, types, global variables etc. that a library provides. The actual library is then linked against after compilation.
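A minimal sketch of that split, reusing the hypothetical FunFun() from the question; all file names and build commands here are illustrative:

/* funfun.h -- the header only *declares* the function; this text is what
   gets pasted into a user's program by #include "funfun.h". */
int FunFun(int x);

/* funfun.c -- the *definition*, compiled separately into a library:
       gcc -c funfun.c
       ar rcs libfunfun.a funfun.o                                      */
int FunFun(int x) { return 2 * x; }

/* main.c -- the user's program; the call is resolved at link time:
       gcc main.c -L. -lfunfun && ./a.out                               */
#include <stdio.h>
#include "funfun.h"
int main(void) { printf("%d\n", FunFun(21)); return 0; }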
There's no such thing as the C library, just as there's no such thing as the C compiler. C is a language that is specified in an open standard (if you're interested, here's the last draft of the latest standard version, C11). There are many actual implementations, and a complete implementation consists of at least a compiler and a standard library. You can of course implement your own standard library. It's a lot of work to have it really conform to the standard (you would have to implement printf() and scanf() correctly, for example). With your own standard library, you can also include your own extensions, but this would only mean that people using your standard library (instead of e.g. glibc on a GNU system) could directly use them.
For having a function available on any implementation of C, it would be necessary to have the C Standard specify it. You won't get a new function in the standard without some very good reasoning.
So if you want to make your own function available to others, do what everyone does and just implement it in your own library. Users can download it, include its headers and link against it.
It's 2012. I'm writing some code in C. Should I still be using C89? Are there still compilers that do not support C99?
I don't mind using /* */ instead of //.
I'm not sure about C89 forbidding mixing declarations and code. I'm kind of leaning towards the idea that it's actually more readable to have all the declarations in one place, and if it isn't, the function is too long.
VLAs look useful but I haven't needed them yet.
Should I stick with C89 if I don't have a compelling reason not to? Are there other things I haven't considered?
Unless you know that you cannot use a C99-compatible compiler (the Visual Studio C compiler is the most prominent candidate), there is no good reason for not using the nice things C99 gives you.
However, even if you need to support that compiler you can use some C99 features - just not all of them.
One feature of C99 that is incredibly handy is being able to do for(int i = ...) instead of having to declare your loop variable on top of the function - especially since C actually has a block scope. That's the kind of declaration where having it on top really doesn't improve the readability.
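For example (valid C99, rejected by a strict C89 compiler):

#include <stdio.h>

int main(void)
{
    /* C99: the counter is declared in the for statement itself and is
       scoped to the loop. In C89 the declaration would have to open a block. */
    for (int i = 0; i < 3; i++)
        printf("%d\n", i);
    /* i is not in scope here */
    return 0;
}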
There is a reason (or many) why C89 was superseded by C99. If you know for sure that no C99 compiler is available for your particular piece of work (unlikely unless you are stuck with Visual Studio which never supported C officially anyway), then you need to stay with C89 but otherwise you should certainly put yourself in a position where you can benefit from the last 20+ years of improvement. There is nothing inherently slower about C99.
Perhaps you should even consider looking at the newest C11 standard. There have been some important fixes for dealing with Unicode that any C programmer could benefit from (other changes in the standard are absolutely minimal)...
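For instance, C11 added Unicode string-literal prefixes and the <uchar.h> types; a quick sketch, assuming your toolchain actually ships <uchar.h>:

#include <uchar.h>
#include <stdio.h>

int main(void)
{
    const char     *s8  = u8"héllo"; /* UTF-8 encoded literal (C11)  */
    const char16_t *s16 = u"héllo";  /* UTF-16 encoded literal (C11) */
    const char32_t *s32 = U"héllo";  /* UTF-32 encoded literal (C11) */
    printf("%s\n", s8);
    (void)s16; (void)s32;            /* silence unused-variable warnings */
    return 0;
}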
Good code is a mixture of performance, scalability, readability, and maintainability.
In my opinion, C99 makes code easier to read and maintain. Very, very few compilers don't support C99, so I say go with it. Use the tools you have available, unless you are certain you will need to compile your project with a compiler that requires the earlier standard.
Check out these links to learn more about the advantages to C99:
http://www.kuro5hin.org/?op=displaystory;sid=2001/2/23/194544/139
http://en.wikipedia.org/wiki/C99#Design
Note that C99 also supports library functions such as snprintf, which are very useful, and has better floating point support. Also, I find variadic macros to be extremely helpful, especially when working with math-intensive applications (like cryptographic algorithms).
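A small sketch combining the two; the LOG macro is a made-up example, not a standard facility:

#include <stdio.h>

/* C99 variadic macro: a format string plus a variable argument list. */
#define LOG(fmt, ...) fprintf(stderr, "[log] " fmt "\n", __VA_ARGS__)

int main(void)
{
    char buf[32];
    /* snprintf never writes past sizeof buf, unlike sprintf. */
    snprintf(buf, sizeof buf, "%d + %d = %d", 2, 2, 4);
    LOG("%s", buf);
    return 0;
}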
I disagree with Paul R's "bottom line" comment. There are multiple cases where C89 is advantageous for portability.
Targeting embedded systems, which may or may not have compilers supporting C99:
https://groups.google.com/forum/#!topic/comp.arch.embedded/WNvhw3T_9pI%5B1-25%5D
Targeting the TinyCC compiler, as might be required in a restricted environment where installing a gigantic toolchain is either impractical or not allowed. (TCC is no longer being developed, and Bellard's last statement as to ISO C99 support was that it was "heading towards" full compliance.)
Supporting dynamic compilation via libtcc (see above).
Targeting MSVC, as others have noted.
For source-compatibility with projects that may be required by their company to use the C89 standard. This is especially relevant if you're writing an open source library, and want to maximize its application in some industry.
As cegfault noted, some of the C99 features as listed on Wikipedia can be very useful, but none I would consider indispensable if your priority is portability, or any of the above reasons apply.
It appears Microsoft hasn't budged on C99 compliance. SimonRev from Beijer Electronics commented on a related MSDN thread in November 2016:
In broad strokes, the only parts of the C99 compiler that were implemented are those parts that they needed to keep the C++ compiler up to date.
Microsoft has done basically nothing to the C compiler since VC6, and they haven't made much secret that C++ is their vision of the future of native code, not C.
In conclusion, if you want portability for embedded or restricted systems, dynamic compilation, MSVC, or compatibility with proprietary source code, I would say C89 is advantageous.
Overview:
I'm working on a hobby app. I want my program to stick to "plain C".
For several reasons, I have to use a C++ compiler, and the related programming environment, that supports "plain C". And, for the same reasons, I cannot change to another compiler.
And there are some C++ features that I have coded unintentionally.
For example, I'm not using namespaces or classes. My current programming job is not "plain C" or "C++", and I haven't used them for some time, so I may have forgotten which stuff is "plain C" only.
I have browsed the internet for "plain C" examples. I have found that many other developers have also posted mixed "plain C" & "C++" examples (some of them unintentionally).
I'm using some dynamically allocated structures. I have been using "malloc", but I'd rather use "new" instead. I thought that some newer standard & compiler versions of "plain C" allowed "new", but it seems I'm wrong.
It seems that "new" is a "C++" feature, & if I really want to make a "plain C"-only program, I should use "malloc".
The reason I want to stick to "plain C" is that I'm working on a cross-platform non-GUI library/tool.
My current platform is "Windowze"; my development environments are:
(1) CodeBlocks (MinGW)
(2) Bloodshed DevCPP
(3) Borland CBuilder 6
Although my goal is to migrate it to Linux too, and maybe other platforms and other (command-line) compilers.
Quick, untested example:
#include <stdio.h>
#include <stdlib.h>
#include <string.h> /* not <strings.h>: strcpy/strncpy are declared here */
struct MyData_T
{
int MyInt;
char MyName[512];
char *MyCharPtr;
};
typedef
struct MyData_T *MyData_P;
MyData_P newData(char* AName)
{
MyData_P Result = NULL; /* C spells it NULL, not null */
Result = malloc(sizeof(struct MyData_T)); /* MyData_T alone is not a type name; the typedef only covers the pointer */
strncpy(Result->MyName, AName, sizeof(Result->MyName) - 1); /* strcpy takes 2 args; bound the copy */
Result->MyName[sizeof(Result->MyName) - 1] = '\0';
// do other stuff with fields
return Result;
} // MyData_P newData(...)
int main(void)
{
int ErrorCode = 0;
MyData_P MyDataVar = newData("John Doe");
// do more stuff with "MyDataVar";
free(MyDataVar);
return ErrorCode;
} // int main(void)
Questions
Where can I get a working "plain C only" compiler for x86 (Windowze, Linux)?
Should I stick to use "malloc", "calloc", and similar ?
Should I consider to change to "C++" & "new", instead ?
Is it valid to use "new" & "delete" in a "plain c" application ?
Any other suggestion ?
Thanks.
Disclaimer
Note: I already spent several hours searching Stack Overflow trying not to post a duplicate question, but none of the previous answers seem clear to me.
Remember that C and C++ are actually completely different languages. They share some common syntax, but C is a procedural language and C++ is object oriented, so they are different programming paradigms.
gcc should work just fine as a C compiler. I believe MinGW uses it. It also has flags you can specify to make sure it's using the right version of C (e.g. C99).
If you want to stick with C then you simply won't be able to use new (it's not part of the C language) but there shouldn't be any problems with moving to C++ for a shared library, just so long as you put your Object Oriented hat on when you do.
I'd suggest you just stick with the language you are more comfortable with. The fact that you're using new suggests that will be C++, but it's up to you.
You can use e.g. GCC as a C compiler. To ensure it's compiling as C, use the -x c option. You can also specify a particular version of the C standard, e.g. -std=c99. To ensure you're not using any GCC-specific extensions, you can use the -pedantic flag. I'm sure other compilers have similar options.
malloc and calloc are indeed how you allocate memory in C.
That's up to you.* You say that you want to be cross-platform, but C++ is essentially just as "cross-platform" as C. However, if you're working on embedded platforms (e.g. microcontrollers or DSPs), you may not find C++ compilers for them.
No, new and delete are not supported in C.
* In my opinion, though, you should strongly consider switching to C++ for any application of non-trivial complexity. C++ has far more powerful high-level constructs than C (e.g. smart pointers, containers, templates) that simplify a lot of the tedious work in C. It takes a while to learn how to use them effectively, but in the long run, they will be worth it.
GCC has a C compiler. It's the basic one. You can call it with gcc -std=c90 to make sure it doesn't slip in any GNU or C++ extensions.
Yes, malloc/calloc are portable and safe for use in C
Only if you have some reason to switch to C++… C is a bit more portable, but not by much, these days.
The most important tip is to save your file with a .c extension and disable compiler extensions. On both Visual C++ and gcc (and thus MinGW) this makes them go into C mode, where C++ additions will be disabled.
You can also force C mode using -std=c90 (or c99, depending on the C standard you want to use; these also disable GNU extensions) in gcc, or /Tc in VC++ (where, to disable MS extensions, you also have to use /Za).
If using Visual Studio, just make the file .c (though it's not strictly a C compiler, it pretends to be, and for the most part is good enough).
In the *nix world you can use gcc, as it's pretty much the standard.
If you want to do C, stick to C; if you want to do C++, use C++.
So stick with malloc etc.; in C++ you'd use smart pointers.
You don't have to stick with C to create a cross-platform non-GUI library. You could just as well develop it in C++. Since it is a hobby, that is OK, but there are such libraries already available.
Note that no version of standard C, including C17 (technical name ISO/IEC 9899:2018), supports the new keyword; if new compiles in a .c file, the file is actually being treated as C++. If you're using Visual Studio Code, check which language modes are set in the c_cpp_properties.json file, e.g.:
"cStandard": "c17", and
"cppStandard": "c++17"
I'm looking into learning C basics and syntax before beginning Systems Programming next month. When doing some reading, I came across the C89/99 standards. According to Wikipedia,
C99 introduced several new features, including inline functions, several new data types (including long long int and a complex type to represent complex numbers), variable-length arrays, support for variadic macros (macros of variable arity) and support for one-line comments beginning with //, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers.
C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has int implicitly assumed. A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. GCC, Sun Studio and other compilers now support many or all of the new features of C99.
I borrowed a copy of K&R, 2nd Edition, and it uses the C89 standard. For a student, does the use of C89 invalidate some subjects covered in K&R, and if so, what should I look out for?
There is no reason to learn C89 or C90 over C99: it has quite literally been superseded. It's easy to find C99 compilers, and there's no reason whatsoever to learn an earlier standard.
This doesn't mean that your professor won't force C89 upon you. From the various questions posted here marked homework, I get the feeling that many, many C (and, unfortunately, C++) courses haven't moved on since C89.
From the perspective of a starting student, the chances are that you won't really notice the difference- there's plenty of C that's both C99 and C89/90 to be covered.
Use the C99 standard, it's newer and has more features. Particularly useful may be the bool type in <stdbool.h> and the int32_t etc. family of types; the latter prevents a lot of unportable code that relies on ints having a certain size. AFAIK, it doesn't invalidate K&R, though some example programs may be written in a slightly different style now.
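A quick sketch of both headers in use:

#include <stdbool.h> /* bool, true, false (C99) */
#include <stdint.h>  /* int32_t, uint64_t, ... (C99) */
#include <stdio.h>

int main(void)
{
    bool ready = true;
    int32_t counter = 100000; /* exactly 32 bits on every platform */
    if (ready)
        printf("%ld\n", (long)counter);
    return 0;
}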
Note that some compilers still don't support C99 properly. I believe that GCC still requires the use of a -std=c99 flag to enable it; many Unix/Linux systems have a c99 command that wraps GCC and enables C99.
The same goes for many university professors. I surprised mine by handing in a program that used bool in my freshman year. He'd never heard of that type in C :)
While I generally agree with the others, it is worth noting that K&R is such a good book that it might be worth learning C from it and then updating your knowledge as you read about the C99 standard.
If you are at student level you probably won't even notice the differences.
Yes, it's a bit odd that you can get a loud consensus that K&R is a great C book, and also a loud consensus that C99 is the correct/current/best version of C. The two positions are incompatible - even if K&R is the best book available to learn "C meaning C99", that just implies the rest are rubbish, or are also hopelessly outdated.
I would advise learning and using C99, but keeping an eye to C89 as you do so. If you use a compiler that has both C89 and C99 compliant modes, then you can write a few bits of C89 just to get an idea of the differences. Then if you ever need to write some code intended to be portable to places that C99 doesn't go, you'll know what to do. If you never have to write any such code, then you've wasted perhaps a day.
Writing C89 properly is actually surprisingly difficult, because getting hold of a copy of the C89 standard is difficult. So, C99 if you can, C89 if for some odd reason you have to, and have some awareness what the difference is. Maybe use K&R to cover the very basics, but get a look at some idiomatic C99 as soon as possible.
As for specific issues to be aware of when reading K&R: there's a list of major changes in the foreword of the standard (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf), although the details aren't laid out there. A lot of them are new features added to C99, so it's not that K&R is wrong, it just may not always use the best tools for a given job. Some of them are quite fiddly things where you should probably consult the standard if you need the details anyway. The rest are things removed from C89, that usually a C99 compiler will tell you about as and when you try to use them.
As a student, that doesn't influence you so much. But if possible, you should find a newer C book, one which covers C99.
The term "C89" describes two very different languages:
The language that programmers in 1989 thought the Committee was describing in places where the Standard was ambiguous, and which supported features that were common in pre-existing implementations.
The language that the Committee has since decided that it wanted to have described, which threw compatibility with existing functionality out the window.
C99 "clarifies" ambiguous parts of the standard by saying that they meant to have the Standard interpreted in a way that would have broken a substantial fraction of existing code and made it impossible to perform many tasks as efficiently as they had been performed in C before 1989.
The right language to program in, for many applications, would be the superset of pre-Standard C, C89, C99, and C11. It's important, however, that anyone programming in that language be clear that they're using that language rather than a shrinking subset which favors speed over reliability.
While I think it's beneficial to know which features are more recent and less likely to be supported by obscure (or intentionally-broken, like MSVC) compilers, there are a few C99 features that you should absolutely use:
snprintf: This is the definitive function for safe and clean string assembly in C. If your compiler is missing it, you can either replace the whole printf subsystem (probably a good idea since most implementations with missing snprintf are also full of (often intentional) bugs in printf behavior), or wrap tmpfile/fprintf/fread/fclose.
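As a sketch of the "safe" part: snprintf's return value also tells you whether the output was truncated.

#include <stdio.h>

int main(void)
{
    char path[16];
    /* snprintf never writes more than sizeof path bytes; it returns the
       length the full string *would* have had, so n >= sizeof path
       means the result was truncated. */
    int n = snprintf(path, sizeof path, "/home/%s/.conf", "someuser");
    if (n < 0 || (size_t)n >= sizeof path)
        fprintf(stderr, "path truncated\n");
    else
        printf("%s\n", path);
    return 0;
}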
stdint.h: If you need fixed-size types (16/32/64-bit), use the standard names int16_t, uint16_t, int32_t, etc. Do not invent your own, and absolutely don't use system-specific ones like INT64 or u32. It just makes your code ugly and hard to integrate and reuse. If your compiler is missing stdint.h, just drop in your own to define the types in terms of the correct-for-your-platform types.
Specifically uint64_t, in place of int foo[2]; or struct { int lo, int hi; } foo; or other hideous legacy hacks to work with 64-bit numbers. Any sane compiler even without C99 support has its own 64-bit types you can use to define int64_t and uint64_t.
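A sketch of such a drop-in header; the name my_stdint.h is hypothetical, and the underlying widths are assumptions that must be checked against your actual platform and compiler:

/* my_stdint.h -- minimal stand-in for <stdint.h> on a pre-C99 compiler. */
#ifndef MY_STDINT_H
#define MY_STDINT_H

typedef short              int16_t;  /* assumes a 16-bit short            */
typedef unsigned short     uint16_t;
typedef int                int32_t;  /* assumes a 32-bit int              */
typedef unsigned int       uint32_t;
typedef long long          int64_t;  /* common pre-C99 extension; on old
                                        MSVC use __int64 instead          */
typedef unsigned long long uint64_t;

#endif /* MY_STDINT_H */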