I was playing around with an online PDP11 emulator (link) and was looking at the programming section of its FAQ.
It says this about programming in C on the emulator:
You need to write pre-K&R C which is quite a bit different from modern C
I believe this is referring to the version of C in use before the publication of The C Programming Language. I have tried to understand this version by reading the sparse C files I can find in the emulator's filesystem, but even simple things like declaring argc and argv have eluded me. I also can't find anything about it online.
Is there any documentation, written at the time or after the fact, on "pre-K&R" C?
For this sort of question, my go-to source is the archived web page of the late Dennis Ritchie:
https://www.bell-labs.com/usr/dmr/www/
From there it's one click to this early reference manual for C, written by Ritchie himself:
https://www.bell-labs.com/usr/dmr/www/cman.pdf
This is indeed "pre-K&R C", featuring anachronisms such as =+ instead of +=.
This is the same reference manual that appeared, in updated form, as Appendix A in the K&R book.
On the same page are links to several other versions of that reference manual, as well as notes about and even the source code of two early versions of Ritchie's compiler. These are pretty fun to look at, although as the page notes, "You won't be able to compile them with today's compilers".
There's a whole Stack Exchange site dedicated to questions like these: https://retrocomputing.stackexchange.com/.
Steve Summit has already answered where to get the documentation, so this post is intended to summarize the noticeable differences from modern C.
Types for function arguments were declared quite differently: they were not specified within the parentheses.
foo(a, b)   /* return type defaults to int; void did not exist yet */
int a;
float b;
{
    /* Function body */
}
No line comments like //. Only /* */.
No logical OR and AND: the operators && and || did not exist. The bitwise operators & and | were used instead.
=- meant the same as -=, which led to ambiguity in expressions such as x=-1, as the sketch below shows.
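A small sketch of that ambiguity, written as an early compiler would have read it (to a modern compiler both lines are simply assignments of -1, assuming x is an int):

x=-1;     /* an early compiler read this as  x =- 1,  i.e. x = x - 1 */
x = -1;   /* spacing it out forced the intended assignment of -1 to x */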
There were no void pointers; char pointers were used instead.
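In practice that meant, for instance, that the allocator was declared as returning char * and its result had to be cast. A sketch in the old style, assumed to sit inside some function (struct thing is just a placeholder; this will not compile cleanly with a modern compiler):

char *malloc();        /* pre-ANSI declaration: no prototype, returns char * */

struct thing *p;
p = (struct thing *)malloc(sizeof(struct thing));   /* the cast was required */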
One could not declare variables in the for header, so one had to write:
int i;
for (i = 0; i < N; ++i)
= was only used for assignment, not initialization. Initializations were written like this: int x 3;
Implicit int. Omitting int stayed legal through C89 (it was only removed in C99), but no (sane) programmer uses it anymore. Combined with the old initializer syntax, this:
foo 3;
is equivalent to the modern
int foo = 3;
There was no const qualifier.
I was looking at one of the first C compilers, written by dmr himself. It was written in the early 1970s, so obviously the syntax is very different. In this file, what does ossiz mean?
ossiz 250;
By the way, is there any helpful article or manual on this sort of thing (older C syntax)?
Just like in B, it's a global variable definition with initialization. In modern C it would be:
int ossiz = 250;
Also: https://github.com/mortdeus/legacy-cc/blob/2b4aaa34d3229616c65114bb1c9d6efdc2a6898e/last1120c/c10.c#L462
C is an old language. For instance, did you know that there would be no problem in using the . operator instead of the -> operator with pointers, and that -> was kept for legacy reasons? see here.
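In today's C the two spellings are interchangeable anyway, since p->x is by definition (*p).x; for example (the struct name here is purely illustrative):

struct point { float x, y; };

float get_x(const struct point *p)
{
    return (*p).x;   /* identical in meaning to p->x */
}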
Studying various C code bases, one notices that most of the time structs are passed around using pointers instead of by value. Arrays are also passed by pointer.
Would it be possible to change the syntax so that structures are passed as pointers by default? For instance, consider the following example:
struct Point {
float x;
float y;
};
struct Line {
Point! p0; // ! means that we want a value, not a pointer
Point! p1;
};
struct Line2 {
Point p0; // would be the same as Point*
Point p1;
};
void foobar(Line line) { // would mean Line* line
...
}
void foobaz(Line! line) { // would mean Line line
...
}
Let's not consider readability; for instance, some argue that -> makes it explicit that something is a pointer. That's not the point here.
Here is the question: what possible problems or corner cases could arise from using such syntax? For example: 'using such syntax, it would be impossible to do this or that'
Java actually works similarly to this proposal: everything is a pointer under the hood. The difference is that Java has a garbage collector and objects are always allocated on the heap (the object itself, not the 'pointer'), while in C structs can be allocated on the stack (although most of the time I see them allocated on the heap).
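For reference, both forms of allocation are of course available in plain C; a small sketch (names are illustrative):

#include <stdlib.h>

struct Point { float x, y; };

void demo(void)
{
    struct Point a = {1.0f, 2.0f};         /* automatic ("stack") storage */
    struct Point *b = malloc(sizeof *b);   /* dynamic ("heap") storage */

    if (b) {
        *b = a;
        free(b);
    }
}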
If one wonders why I am asking this, it is because I am creating a programming language for low-level development and I want to optimize the syntax for the common case. Although I like the C language syntax itself, I've gotten frustrated by having to use headers (updating a function signature in two places, for instance), the lack of support for function overloading (I don't like writing int_list_add, float_list_add) and the like.
Thank you all who have already replied, but I am not trying to modify C. This is research for my own language, which I have no pretension will be used by others.
My language is for low-level development while trying as much as possible to offer high-level features: OO constructs, higher-order functions, closures and so on. It won't be garbage collected, and it will use a memory model similar to C's (similar, not equal).
The language will have pointers just as C does: you can take the address of a variable and dereference a pointer.
Since it seems to matter for the question I asked, here is the syntax of my language, which, by the way, is totally different from C; proof that I am not trying to create a C clone.
record Point:
x : float
y : float
record Line:
p0 : Point!
p1 : Point!
record Line2:
p0 : Point
p1 : Point
function foobar : void
#line : Line
function foobaz : void
#line : Line!
As I've said, I want to optimize the syntax of my language for the common case. So if I am going to pass by pointer almost all the time, I don't want to be writing a * again, and again, and again. The only problem (and this has nothing to do with my programming skills; it is a philosophical issue) is that I can't see the problems that could arise from not specifying explicitly that something is a pointer.
You can change the syntax and the semantics (actually much more important than just the syntax) of C, but you should not call that language C anymore.
C is a standardized language; first read the wiki page on C11 and the n1570 draft (or buy the paper equivalent from ISO or your national standards body).
I am not a lawyer, and I don't know if and how ISO legally protects the standard. (However, be sure that if you call your stuff C, people will surely laugh at you.)
For example: 'using such syntax, it would be impossible to do this or that'
Nothing would be impossible (both your language and the C language are Turing-complete). Some things might become more inconvenient.
BTW, you don't describe precisely enough what you are thinking about (examples do not describe a programming language, and you don't explain the semantic aspects), so we can't really comment on it.
Read several programming language specifications, including not only n1570 but also n3337 (close to the C++11 specification) and of course R5RS (a short, well-written specification of a Scheme dialect).
At last, be sure to implement your programming language (and name it something other than C). I recommend implementing it as free software (e.g. on GitHub) and bootstrapping your compiler.
So try to specify your language as well as C, C++, or Scheme are specified, then implement it. You could need decades of work, and you will certainly need several years.
Read the Dragon Book, SICP and Lisp In Small Pieces first. Read also about denotational semantics.
Consider implementing your language with the help of libraries like GCCJIT or LLVM, or by compiling it to C. Then you'll have a "proof-of-concept" implementation. Have fun doing it.
By implementing your language, you'll discover its semantics traps. (I love doing that).
Look also into other languages, like Ada, Go, OCaml, Rust, Modula, Cyclone, Scheme, C++ (perhaps you are reinventing its references, or its smart pointers), Common Lisp, ...
Read Scott's Programming Languages Pragmatics book.
So I'm completely new to programming. I currently study computer science and have just read the first 200 pages of my programming book, but there's one thing I cannot seem to see the difference between, and which hasn't been clearly specified in the book: reserved words vs. standard identifiers. How can I tell from the code whether something is one or the other?
I know that reserved words are ones that cannot be changed, while standard identifiers can be (though that's not recommended, according to my book). The problem is that while my book says reserved words are always in pure lowercase, like
(int, void, double, return)
it kinda seems to be the very same for standard identifiers like
(printf, scanf)
so how do I know which is which? Do I have to learn all the reserved words of ANSI C, which is the language we are currently learning (or of whatever future language I might work with), to know which is which?
First off, you'll have to learn the rules for each language you learn as it is one of the areas that varies between languages. There's no universal rule about what's what.
Second, in C, you need to know the list of keywords; that seems to be what you're referring to as 'reserved words'. Those are important; they're immutable; they can't be abused because the compiler won't let you. You can't use int as a variable name; it is always a type.
Third, the C preprocessor can be abused to hijack anything; if you compile with #define double int in effect, you get what you deserve, but there's nothing much to stop you doing that.
Fourth, the only predefined variable name is __func__, the name of the current function.
Fifth, names such as printf() are defined by the standard library, but the standard library has to be implemented by someone using a C compiler; ask the maintainers of the GNU C library. For a discussion of many of the ideas behind the treaty between the standard and the compiler writers, and between the compiler writers and the programmers using a compiler, see the excellent book The Standard C Library by P J Plauger from 1992. Yes, it is old and the modern standard C library is somewhat bigger than the one from C90, but the background information is still valid and very helpful.
Reserved words are part of the language's syntax. C without int is not C, but something else. They are built into the language and are not and cannot be defined anywhere in terms of this particular language.
For example, if is a reserved keyword. You can't redefine it and even if you could, how would you do this in terms of the C language? You could do that in assembly, though.
The standard library functions you're talking about are ordinary functions that have been included into the standard library, nothing more. They are defined in terms of the language's syntax. Also, you can redefine these functions, although it's not advised to do so as this may lead to all sorts of bugs and unexpected behavior. Yet it's perfectly valid to write:
#include <stdio.h>

/* Replaces the standard library's puts() within this program. */
int puts(const char *msg) {
    (void)msg;                                  /* the caller's string is ignored */
    printf("This has been monkey-patched!\n");
    return -1;
}
You'd get a warning that'd complain about the redefinition of a standard library function, but this code is valid anyway.
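A minimal caller, assuming it is compiled and linked together with the definition above:

int main(void) {
    puts("hello");   /* resolves to the replacement above, not the library's puts */
    return 0;
}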
Now, imagine reimplementing return:
unknown_type return(unknown_type stuff) {
// what to do here???
}
A few years ago, before standardization of C, it was allowed to use struct selectors on addresses. For example, the following code was allowed and frequently used.
#define PTR 0xAA000
struct { int integ; };
func() {
int i;
i = PTR->integ; /* here, c is set to the first int at PTR */
return c;
}
Maybe it wasn't very neat, but I like it. In my opinion, the power and versatility of this language rely also on its lack of constraints. Nowadays, compilers just emit an error. I'd like to know if it is possible to remove this restraint in the GNU C compiler.
PS: similar code was used in the UNIX kernel by the inventors of C (in V6, some dummy structures were declared in param.h).
'A few years ago' is actually a very, very long time ago. AFAICR, the C in 7th Edition UNIX™ (1979, a decade before the C89 standard was defined) didn't support that notation any more (but see below).
The code shown in the question only worked when all structure members of all structures shared the same name space. That meant that structure.integ or pointer->integ always referred to an int at the start of a structure because there was only one possible structure member integ across the entire program.
Note that in 'modern' C (1978 onwards), you cannot reference the structure type; there's neither a structure tag nor a typedef for it — the type is useless. The original code also references an undefined variable c.
To make it work, you'd need something like:
#define PTR 0xAA000
struct integ { int integ; };
int func(void)
{
struct integ *ptr = (struct integ *)PTR;
return ptr->integ;
}
C for 7th Edition UNIX
I suggested that the C with 7th Edition UNIX supported separate namespaces for separate structure types. However, the C Reference Manual published with the UNIX Programmer's Manual Vol 2 mentions in §8.5 Structures:
The names of structure members and structure tags may be the same as ordinary variables, since a distinction can be made by context. However, names of tags and members must be distinct. The same member name can appear in different structures only if the two members are of the same type and if their origin with respect to their structure is the same; thus separate structures can share a common initial segment.
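A sketch of what that rule permitted (the structure names here are invented): two structures may reuse member names as long as those members have the same type and the same offset, giving a shared initial segment:

/* i_flag and i_count may appear in both: same types, same offsets,
 * so the two structures share a common initial segment. */
struct dev_like  { int i_flag; int i_count; char *d_name; };
struct file_like { int i_flag; int i_count; long  f_offset; };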
However, that same manual also mentions the notations (see also What does =+ mean in C):
§7.14.2 lvalue =+ expression
§7.14.3 lvalue =- expression
§7.14.4 lvalue =* expression
§7.14.5 lvalue =/ expression
§7.14.6 lvalue =% expression
§7.14.7 lvalue =>> expression
§7.14.8 lvalue =<< expression
§7.14.9 lvalue =& expression
§7.14.10 lvalue =^ expression
§7.14.11 lvalue =| expression
The behavior of an expression of the form ‘‘E1 =op E2’’ may be inferred by taking it as equivalent to ‘‘E1 = E1 op E2’’; however, E1 is evaluated only once. Moreover, expressions like ‘‘i =+ p’’ in which a pointer is added to an integer, are forbidden.
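For comparison, the old spellings next to their modern equivalents (no modern compiler accepts the left-hand forms; x and mask are assumed to be integers):

x =+ 2;      /* modern spelling: x += 2  */
x =>> 1;     /* modern spelling: x >>= 1 */
x =& mask;   /* modern spelling: x &= mask */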
AFAICR, that was not supported in the first C compilers I used (1983 — I'm ancient, but not quite that ancient); only the modern += notations were allowed. In other words, I don't think the C described by that reference manual was fully current when the product was released. (I've not checked my 1st Edition of K&R — does anyone have one on hand to check?) You can find the UNIX 7th Edition manuals online at http://cm.bell-labs.com/7thEdMan/.
By giving the structure a type name and adjusting your macro slightly you can achieve the same effect in your code:
typedef struct { int integ; } PTR_t;
#define PTR ((PTR_t*)0xAA000)
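With those two lines in place, the original selector syntax reads much as before; a sketch:

int func(void)
{
    return PTR->integ;   /* reads the int stored at address 0xAA000 */
}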
I'd like to know if it is possible to remove this restraint in the GNU C compiler.
I'm reasonably sure the answer is no -- that is, unless you rewrite gcc to support the older version of the language.
The gcc manual documents the -traditional command-line option:
'-traditional' '-traditional-cpp'
Formerly, these options caused GCC to attempt to emulate a
pre-standard C compiler. They are now only supported with the
`-E' switch. The preprocessor continues to support a pre-standard
mode. See the GNU CPP manual for details.
This implies that modern gcc (the quote is from the 4.8.0 manual) no longer supports pre-ANSI C.
The particular feature you're referring to isn't just pre-ANSI, it's very pre-ANSI. The ANSI standard was published in 1989. The first edition of K&R was published in 1978, and as I recall the language it described didn't support the feature you're looking for. The initial release of gcc was in 1987, so it's very likely that no version of gcc has ever supported that feature.
Furthermore, enabling such a feature would break existing code which may depend on the ability to use the same member name in different structures. (Traces of the old rules survive in the standard C library, where for example the members of type struct tm all have names starting with tm_; in modern C that would not be necessary.)
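For reference, an abridged struct tm from <time.h>; the tm_ prefixes are exactly that fossil:

struct tm {          /* abridged */
    int tm_sec;      /* seconds after the minute */
    int tm_min;      /* minutes after the hour */
    int tm_hour;     /* hours since midnight */
    int tm_mday;     /* day of the month */
    int tm_mon;      /* months since January */
    int tm_year;     /* years since 1900 */
    /* ... */
};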
You might be able to find sources for an ancient C compiler that works the way you want. The late Dennis Ritchie's home page would be a good starting point for that. It's not at all obvious that you'd be able to get such a compiler working on any modern system without a great deal of work. And the result would be a compiler that doesn't support a number of newer features of C that you might find useful, such as the long, signed, and unsigned keywords, the ability to pass structures by value, function prototypes, and diagnostics for attempts to mix pointers and integers.
C is better now than it was then. There are a few dangerous things that are slightly more difficult than they were, but I'm not aware that any actual expressive power has been lost.
Here is the question: how did C (K&R C) look? The question is about the first ten or twenty years of C's life.
I know, well, I heard it from a prof at my uni, that C didn't have the standard libraries that we get with ANSI C today. They used to write I/O routines in wrapped assembly! The second thing is that the K&R book is one of the best books ever for a programmer to read. That is what my prof told us :)
I would like to know more about good ol' C. For example, what major differences do you know of compared to ANSI C, and how did C change programmers' minds about programming?
Just for the record, I am asking this question after reading mainly these two papers:
Evolving a language in and for the real world: C++ 1991-2006
A History of C++: 1979-1991
They are about C++, I know! That's why I want to know more about C: these two papers are about how C++ was born out of C, and I am now asking about how C looked before that. Thanks Lazarus for pointing out the 1st edition of K&R, but I am still keen to learn more about C from the SO gurus ;)
Well, for a start, there was none of that function prototype rubbish. main() was declared thus:
/* int */ main(c,v)
int c;
char *v[];
{
/* Do something here. */
}
And there were none of those fancy double-slash comments either. Nor enumerations. Real men used #define.
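For instance, a sketch of the pre-enum idiom (the names are purely illustrative):

/* Pre-enum style: symbolic constants were plain macros */
#define FALSE   0
#define TRUE    1
#define MAXLINE 512

/* What a later compiler lets you write for the first two:
 *     enum boolean { FALSE, TRUE };
 */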
Aah, brings a tear to my eyes, remembering the good old days :-)
Have a look at the 'home page' for the K&R book at Bell Labs, in particular the heading "The history of the language is traced in ``The Development of the C Language'', from HOPL II, 1993"
Speaking from personal experience, my first two C compilers/dev environments were DeSmet C (16-bit MS-DOS command line) and Lattice C (also 16-bit MS-DOS command line). DeSmet C came with its own text editor (see.exe) and libraries -- non-standard functions like scr_rowcol() positioned the cursor. Even then, however, there were certain functions that were standard, such as printf(), fopen() fread(), fwrite() and fclose().
One of the interesting peculiarities of the time was that you had a choice between four basic memory models: S, P, D and L. Other variations came and went over the years, but these were the most significant. S was the "small" model, 16-bit addressing for both code and data, limiting you to 64K for each. L used segmented addressing, a 16-bit segment register combined with a 16-bit offset to compute a 20-bit address, giving you up to 1024K of address space. Of course, in a 16-bit DOS world, you were confined to a physical limitation of 640K. P and D were compromises between the two modes, where P allowed for segmented (640K) code and 64K data, and D allowed for 64K code and segmented (640K) data addressing.
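As a quick check on those limits, real-mode address arithmetic combines a segment and an offset like this (a small sketch, not specific to either compiler):

#include <stdio.h>

int main(void)
{
    /* Real-mode 8086: physical = segment * 16 + offset, a 20-bit result,
     * hence the 1024K ceiling mentioned above. */
    unsigned long segment = 0x1234, offset = 0x0010;

    printf("%05lX\n", (segment << 4) + offset);   /* prints 12350 */
    return 0;
}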
Wikipedia has some information on this topic.
Here is one example of the code that changed with ANSI C for the better:
double GetSomeInfo(x)
int x;
{
return (double)x / 2.0;
}
int PerformFabulousTrick(x, y, z)
int x, y;
double z;
{
/* here we go */
z = GetSomeInfo(x, y); /* argument matching? what's that? */
return (int)z;
}
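For contrast, a sketch of the same pair with ANSI prototypes; the stray second argument in the old call would now be rejected at compile time:

double GetSomeInfo(int x)
{
    return (double)x / 2.0;
}

int PerformFabulousTrick(int x, int y, double z)
{
    (void)y;              /* y is unused here, unlike the loose K&R call above */
    z = GetSomeInfo(x);   /* GetSomeInfo(x, y) would now be a compile-time error */
    return (int)z;
}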
I first started working with C on VAX/VMS in 1986. Here are the differences I remember:
No prototypes -- function definitions and declarations were written as
int main() /* no void to specify empty parameter list */
{
void foo(); /* no parameter list in declaration */
...
}
...
void foo(x,y)
int x;
double y;
{
...
}
No generic (void) pointer type; all of the *alloc() functions returned char * instead (which is part of why some people still cast the return value of malloc(); with pre-ANSI compilers, you had to);
Variadic functions were handled differently; there was no requirement for any fixed arguments, and the header file was named differently (varargs.h instead of stdarg.h; see the sketch after this list);
A lot of stuff has been added to math.h over the years, especially in the C99 standard; '80s-vintage C was not the greatest tool for numerical work;
The libraries weren't standardized; almost all implementations had a version of stdio, math, string, ctype, etc., but the contents were not necessarily the same across implementations.
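On the variadic point above, here is a sketch of the old <varargs.h> style; many modern systems no longer ship this header, so treat it purely as a historical illustration:

#include <varargs.h>   /* pre-ANSI header */

int sum(va_alist)      /* no fixed parameters are required */
    va_dcl             /* declares va_alist; the macro supplies its own semicolon */
{
    va_list ap;
    int n, total = 0;

    va_start(ap);      /* takes only the va_list, unlike stdarg's va_start(ap, last) */
    while ((n = va_arg(ap, int)) != 0)   /* caller ends the list with 0, e.g. sum(1, 2, 3, 0) */
        total += n;
    va_end(ap);
    return total;
}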
Look at the code for the Version 6 Unix kernel - that was what C looked like!
See Lions' Commentary on Unix 6th Edition (Amazon).
Also, it would be easier if you told us your age - your profile says you're 22, so you're asking about code prior to 1987.
Also consider: The Unix Programming Environment from 1984.
While for obvious reasons the core language came before the library, if you get hold of a first-edition copy of K&R, published in 1978, you will find the library very familiar. Also, C was originally used for Unix development, and the library hooked into the I/O services of the OS. So I think your prof's assertion is probably apocryphal.
The most obvious difference is the way functions were defined:
VOID* copy( dest, src, len )
VOID* dest ;
VOID* src ;
int len ;
{
...
}
instead of:
void* copy( void* dest, void* src, int len )
{
...
}
for example. Note the use of VOID; K&R C did not have a void type, and typically VOID was a macro defined as int*. Needless to say, to allow this to work, the type checking in early compilers was permissive. From a practical point of view, the ability of C to validate code was poor (largely through lack of function prototypes and weak type checking), and hence the popularity of tools such as lint.
In 1978 the definition of the language was the K&R book. In 1989 the language was standardised by ANSI, and later by ISO; the 2nd edition of K&R, which was based on ANSI C, is no longer regarded as the language definition. It is still the best book on C IMO, and a good programming book in general.
There is a brief description on Wikipedia which may help. Your best bet is to get a first edition copy of K&R; however, I would not use it to learn C. Get a 2nd ed. for that.
I started using C in the early 1980s. The key difference I've seen between now and then is that early C did not have function prototypes, as someone noted. The earliest C I ever used had pretty much the same standard library as today. If there was a time when C didn't have printf or fwrite, that was before even my time! I learned C from the original K&R book. It is indeed a classic, and proof that technically sophisticated people can also be excellent writers. I'm sure you can find it on Amazon.
You might glance at the obfuscated C contest entries from the time period you are looking for.
16-bit integers were quite common in the ol' days.