I am new to coding using MISRA C guidelines.
The following are two rules in MISRA C 2004:
Rule 16.1 (required): Functions shall not be defined with a variable number of arguments.
Rule 20.9 (required): The input/output library <stdio.h> shall not be used in production code.
This clearly means that I can't use printf in production code for it to be MISRA C compliant, because printf is a part of <stdio.h> and allows a variable number of arguments. So I set out on a quest to find out how I can write my own printf statement. So far I am unable to find any solution for this predicament. Any help from fellow developers would be appreciated.
so far I am unable to find any solution for this predicament
You have to use functions that print one (countable) thing at a time. An example interface you might want to implement could look like the following:
print_string("Hello");
print_int(5);
print_char('\n');
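For what it's worth, here is a minimal sketch of how such an interface could be implemented without <stdio.h>, assuming some project-specific low-level byte output routine (the uart_putc() below is a hypothetical name; substitute whatever your target actually provides):

#include <stdint.h>

/* Hypothetical low-level output routine; on a real target this would write
   one byte to a UART register, an LCD driver, etc. */
extern void uart_putc(char c);

void print_char(char c)
{
    uart_putc(c);
}

void print_string(const char *s)
{
    while (*s != '\0')
    {
        uart_putc(*s);
        s++;
    }
}

void print_int(int32_t value)
{
    char buf[12];            /* enough digits for a 32-bit value */
    uint32_t u;
    int32_t i = 0;

    if (value < 0)
    {
        uart_putc('-');
        u = (uint32_t)(-(value + 1)) + 1u;   /* avoids overflow on INT32_MIN */
    }
    else
    {
        u = (uint32_t)value;
    }

    do
    {
        buf[i] = (char)('0' + (u % 10u));
        u /= 10u;
        i++;
    } while (u > 0u);

    while (i > 0)
    {
        i--;
        uart_putc(buf[i]);
    }
}

Each function takes a fixed number of arguments and pulls in nothing from <stdio.h>, which is the point of the exercise.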
so I set out on a quest to find out how I can write my own printf statement
Most MISRA-C systems are embedded systems where printf is just some bloated wrapper around a UART library. The usual solution is to develop your own logging/messaging tool instead. It doesn't have to be UART-based; it might just as well be some other serial bus, or 8 parallel data lines, or an LCD/7-segment display... all depending on what you need to display and whether you intend this to be part of the production code or not.
So how to do this is highly project-specific and it's typically more of a system design and electronics problem than a programming one.
EDIT
Since you seem to be making some sort of general-purpose library, one solution is to simply provide an API that returns strings to the caller, then let the caller worry about how to present them. That makes your lib MISRA-C compliant, while allowing the caller to print strings in whatever application-specific way they have available. For example:
void lib_getmsg (char* msg, size_t bufsize);
Where "lib" is some prefix for your library. Leave string allocation to the caller. Alternatively, the old-fashioned way:
lib_result_t lib_dosomething (void);
// Returns LIB_OK if went OK, returns LIB_ERR in case of errors.
// To get more information, call lib_get_lastmsg.
const char* lib_get_lastmsg (void);
This returns a pointer to an internal static string allocated by your library. The downside of this is that it won't work well in multi-process environments.
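For illustration, a hedged sketch of how a caller might consume that second, last-message style API (the declarations restate the interface above; app_print_string() is an invented name for whatever output routine the application actually has):

/* Assumed library interface, as sketched above. */
typedef enum { LIB_OK, LIB_ERR } lib_result_t;
lib_result_t lib_dosomething(void);
const char *lib_get_lastmsg(void);

/* Application-specific output routine, e.g. a thin UART or LCD driver. */
extern void app_print_string(const char *s);

void do_work(void)
{
    if (lib_dosomething() != LIB_OK)
    {
        /* The library never prints anything itself; the caller decides
           how (and whether) to present the message. */
        app_print_string(lib_get_lastmsg());
    }
}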
You need to understand the rationale for the MISRA C guidelines, understand the context they are used in, and the circumstances of your own code.
You also need to understand that the MISRA Guidelines are not to be blindly followed with a tick-box mentality... you then need to appreciate that those nice folk at MISRA provide several chapters of useful material before the actual guidelines. Part of that is the Deviation procedure.
If you can justify why you feel you need to violate a guideline, then use the deviation procedure that is specified. This requires that you understand the nature of the violation and what you are going to do to ensure the integrity of your application.
If you genuinely need to use printf() and you can justify that, use it with a deviation.
On Linux, running on a modern x86_64 processor:
int main(void)
{
    char *s = "Hello, World!\n";
    long len = 14;           /* number of bytes to write */
    long fd = 1;             /* stdout */
    long syscall_no = 1;     /* __NR_write on x86-64 */
    long ret = 0;

    __asm__ volatile("syscall"
                     : "=a"(ret)
                     : "a"(syscall_no),
                       "D"(fd),
                       "S"(s),
                       "d"(len)
                     : "rcx", "r11", "memory");   /* syscall clobbers rcx/r11 */

    (void)ret;
    return 0;
}
Output:
Hello, World!
By reading What does -D_XOPEN_SOURCE do/mean?, I understand how to use feature test macros.
But I still don't understand why we need them. I mean, can't we just enable all the available features? Then the docs would simply say: this function is only available on Mac/BSD, that function is only available on Linux; if you use it, your program will only run on that system.
So why do we need a feature test macro in the first place?
why do we need it, I mean, can we just enable all features available?
Imagine some company has written perfectly fine super portable code roughly like the following:
#include <stdlib.h>

struct someone_s { char name[20]; };

/// @brief grants a plant to someone
int grantpt(int plant_no, struct someone_s someone) {
    // some super plant granting algorithm here
    return 0;
}

int main() {
    // some program here
    struct someone_s kamil = { "Kamil" };
    return grantpt(20, kamil);
}
That program is completely fine, everything works, and it is very much plain C, thus it should be portable anywhere. Now imagine for a moment that _XOPEN_SOURCE did not exist. A customer receives the sources of that program and tries to compile and run it on his bleeding-edge Unix computer with a certified C compiler on a certified POSIX system, and he receives an error that the company has to fix, and in turn has to pay for:
/tmp/1.c:7:9: error: conflicting types for ‘grantpt’; have ‘int(int, struct someone_s)’
    7 | int grantpt(int plant_no, struct someone_s someone) {
      |     ^~~~~~~
In file included from /tmp/1.c:2:
/usr/include/stdlib.h:977:12: note: previous declaration of ‘grantpt’ with type ‘int(int)’
  977 | extern int grantpt (int __fd) __THROW;
      |            ^~~~~~~
Looks like a completely random name picked for a function is already taken in POSIX - grantpt().
When introducing new symbols that are not in reserved space, standards like POSIX can't just "add them" and expect the world not to protest - conflicting definitions can and do break valid programs. To battle the issue, feature test macros were introduced. When a program does #define _XOPEN_SOURCE 500, it means that it is prepared for that version of the POSIX standard and that there are no conflicts between the code and the symbols POSIX introduces in that version.
Feature test macros do not just mean "my program wants to use these functions"; most importantly, they mean "my program has no conflicts with these functions", so that existing programs continue to run.
The theoretical reason why we have feature selection macros in C is to get the C library out of your way. Suppose, hypothetically, you want to use the name getline for a function in your program. The C standard says you can do that. But some operating systems provide a C library function called getline, as an extension. Its declaration will probably clash with your definition. With feature selection macros, you can, in principle, tell those OSes' <stdio.h> headers not to declare their getline so you can use yours.
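A concrete sketch of that getline scenario under glibc (assuming a strict-standards build such as cc -std=c11, so that no extension feature macros are enabled by default; with _GNU_SOURCE defined instead, the declaration in <stdio.h> would clash with this one):

#include <stdio.h>   /* in a strict-standards build this does not declare getline */

/* Our own, unrelated getline: read up to n-1 characters into buf. */
int getline(char *buf, int n)
{
    int i = 0;
    int c;
    while (i < n - 1 && (c = getchar()) != EOF && c != '\n')
        buf[i++] = (char)c;
    buf[i] = '\0';
    return i;
}

int main(void)
{
    char line[80];
    printf("you typed %d characters\n", getline(line, (int)sizeof line));
    return 0;
}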
In practice these macros are too coarse grained to be useful, and the only ones that get used are the ones that mean "give me everything you got", and people do exactly what you speculate they could do, in the documentation.
Newer programming languages (Ada, C++, Modula-2, etc.) have a concept of "modules" (sometimes also called "namespaces") which allow the programmer to give an exact list of what they want from the runtime library; this works much better.
Why do we need feature test macros?
You use feature test macros to determine if the implementation supports certain features or if you need to select an alternative way to implement whatever it is you're implementing.
One example is the set of *_s functions, like strcpy_s:
errno_t strcpy_s(char *restrict dest, rsize_t destsz, const char *restrict src);
// put this first to signal that you actually want the LIB_EXT1 functions
#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>
Then in your code:
#ifdef __STDC_LIB_EXT1__
errno_t err = strcpy_s(...); // The implementation supports it, so you can use it
#else
// Not supported; use an alternative, like your own implementation,
// or let it fail to compile.
#endif
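Putting the two pieces together, a minimal self-contained sketch of the pattern might look like this (it builds whether or not the implementation provides the Annex K functions):

#define __STDC_WANT_LIB_EXT1__ 1   /* must appear before the include */
#include <string.h>
#include <stdio.h>

int main(void)
{
    char dst[16];
#ifdef __STDC_LIB_EXT1__
    /* The implementation provides the bounds-checked Annex K functions. */
    strcpy_s(dst, sizeof dst, "hello");
#else
    /* Fall back to plain strcpy (or your own bounded copy). */
    strcpy(dst, "hello");
#endif
    puts(dst);
    return 0;
}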
can we just enable all features available?
When it comes to why you need to tell the implementation that you actually want a certain set of features (instead of it just including them all automatically), I have no better answer than that it could possibly make programs slower to compile, and possibly also produce bigger executables than necessary.
Similarly, the implementation does not link with every library it has available, but only the most basic ones. You have to tell it what you need.
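By way of analogy, a small sketch: on many Unix-like systems even the standard math functions live in a separate library that you must ask for explicitly at link time, much as you ask for a feature set with a macro:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* On many Unix-like systems this only links if you ask for libm
       explicitly, e.g.  cc prog.c -lm  */
    printf("%f\n", sqrt(2.0));
    return 0;
}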
In theory, you could create a header file which defines all the possible macros that you've found will enable a certain set of features.
#define _XOPEN_SOURCE 700
#define __STDC_WANT_LIB_EXT1__ 1
...
But as you see with _XOPEN_SOURCE, there are different releases, and you can't enable them all at the same time; you need to select one.
So I'm completely new to programming. I currently study computer science and have just read the first 200 pages of my programming book, but there's one thing I cannot seem to see the difference between, and which hasn't been clearly explained in the book: reserved words vs. standard identifiers. How can I tell from the code which is which?
I know the reserved words are ones that cannot be changed, while the standard identifiers can (though that's not recommended, according to my book). The problem is that while my book says reserved words are always in pure lowercase, like
(int, void, double, return)
it seems to be the very same for standard identifiers, like
(printf, scanf)
so how do I know which is which? Or do I have to learn all the reserved words of ANSI C (the language we are currently learning), or of whatever future language I might work with, to tell them apart?
First off, you'll have to learn the rules for each language you learn as it is one of the areas that varies between languages. There's no universal rule about what's what.
Second, in C, you need to know the list of keywords; that seems to be what you're referring to as 'reserved words'. Those are important; they're immutable; they can't be abused because the compiler won't let you. You can't use int as a variable name; it is always a type.
Third, the C preprocessor can be abused to hijack anything; if you compile with #define double int in effect, you get what you deserve, but there's nothing much to stop you doing that.
Fourth, the only predefined variable name is __func__, the name of the current function.
Fifth, names such as printf() are defined by the standard library, but the standard library has to be implemented by someone using a C compiler; ask the maintainers of the GNU C library. For a discussion of many of the ideas behind the treaty between the standard and the compiler writers, and between the compiler writers and the programmers using a compiler, see the excellent book The Standard C Library by P J Plauger from 1992. Yes, it is old and the modern standard C library is somewhat bigger than the one from C90, but the background information is still valid and very helpful.
Reserved words are part of the language's syntax. C without int is not C, but something else. They are built into the language and are not and cannot be defined anywhere in terms of this particular language.
For example, if is a reserved keyword. You can't redefine it and even if you could, how would you do this in terms of the C language? You could do that in assembly, though.
The standard library functions you're talking about are ordinary functions that have been included into the standard library, nothing more. They are defined in terms of the language's syntax. Also, you can redefine these functions, although it's not advised to do so as this may lead to all sorts of bugs and unexpected behavior. Yet it's perfectly valid to write:
int puts(const char *msg) {
    printf("This has been monkey-patched!\n");
    return -1;
}
You'd get a warning that'd complain about the redefinition of a standard library function, but this code is valid anyway.
Now, imagine reimplementing return:
unknown_type return(unknown_type stuff) {
    // what to do here???
}
Hi everyone. I actually have two questions, somewhat related.
Question #1: Why is gcc letting me declare variables after action statements? I thought the C89 standard did not allow this. (GCC Version: 4.4.3) It even happens when I explicitly use --std=c89 on the compile line. I know that most compilers implement things that are non-standard, i.e. C compilers allowing // comments, when the standard does not specify that. I'd like to learn just the standard, so that if I ever need to use just the standard, I don't snag on things like this.
Question #2: How do you cope without objects in C? I program as a hobby, and I have not yet used a language that does not have objects (a.k.a. OO concepts). I already know some C++, and I'd like to learn how to use C on its own. Supposedly, one way is to make a POD struct and write functions like StructName_constructor(), StructName_doSomething(), etc., and pass the struct instance to each function - is this the 'proper' way, or am I totally off?
EDIT: Due to some minor confusion, I am defining what my second question is more clearly: I am not asking How do I use Objects in C? I am asking How do you manage without objects in C?, a.k.a. how do you accomplish things without objects, where you'd normally use objects?
In advance, thanks a lot. I've never used a language without OOP! :)
EDIT: As per request, here is an example of the variable declaration issue:
/* includes, or whatever */

int main(int argc, char *argv[]) {
    int myInt = 5;
    printf("myInt is %d\n", myInt);

    int test = 4; /* This does not result in a compile error */
    printf("Test is %d\n", test);

    return 0;
}
C89 doesn't allow this, but C99 does. Although it has taken a long time to catch on, some compilers (including gcc) are finally starting to implement C99 features.
IMO, if you want to use OOP, you should probably stick to C++ or try out Objective C. Trying to reinvent OOP built on top of C again just doesn't make much sense.
If you insist on doing it anyway, yes, you can pass a pointer to a struct as an imitation of this -- but it's still not a good idea.
It does often make sense to pass (pointers to) structs around when you need to operate on a data structure. I would not, however, advise working very hard at grouping functions together and having them all take a pointer to a struct as their first parameter, just because that's how other languages happen to implement things.
If you happen to have a number of functions that all operate on/with a particular struct, and it really makes sense for them to all receive a pointer to that struct as their first parameter, that's great -- but don't feel obliged to force it just because C++ happens to do things that way.
Edit: As far as how you manage without objects: well, at least when I'm writing C, I tend to operate on individual characters more often. For what it's worth, in C++ I typically end up with a few relatively long lines of code; in C, I tend toward a lot of short lines instead.
There is more separation between the code and data, but to some extent they're still coupled anyway -- a binary tree (for example) still needs code to insert nodes, delete nodes, walk the tree, etc. Likewise, the code for those operations needs to know about the layout of the structure, and the names given to the pointers and such.
Personally, I tend more toward using a common naming convention in my C code, so (for a few examples) the pointers to subtrees in a binary tree are always just named left and right. If I use a linked list (rare) the pointer to the next node is always named next (and if it's doubly-linked, the other is prev). This helps a lot with being able to write code without having to spend a lot of time looking up a structure definition to figure out what name I used for something this time.
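A minimal sketch, purely to illustrate that naming convention:

/* Subtree pointers are always just 'left' and 'right'. */
struct tree_node {
    int value;
    struct tree_node *left;
    struct tree_node *right;
};

/* The next node is always 'next' (and 'prev' if the list is doubly linked). */
struct list_node {
    int value;
    struct list_node *next;
    struct list_node *prev;
};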
Question #1: I don't know why there is no error, but you are right, variables have to be declared at the beginning of a block. The good thing is you can declare blocks anywhere you like :). E.g.:
{
    int some_local_var;
}
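Expanding the question's own example along those lines (a sketch; the same printf calls as above, now valid C89):

#include <stdio.h>

int main(int argc, char *argv[]) {
    int myInt = 5;
    printf("myInt is %d\n", myInt);

    {   /* a new block: C89 allows declarations at the start of any block */
        int test = 4;
        printf("Test is %d\n", test);
    }

    return 0;
}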
Question #2: Actually, programming in C without inheritance is sometimes quite annoying, but there are possibilities to get OOP to some degree. For example, look at the GTK source code and you will find some examples.
You are right, functions like the ones you have shown are common, but the constructor is commonly divided into an allocation function and an initialization function. E.g.:
someStruct* someStruct_alloc(void) { return (someStruct*)malloc(sizeof(someStruct)); }
void someStruct_init(someStruct* this, int arg1, int arg2) {...}
In some libraries, I have even seen some sort of polymorphism, where function pointers are stored within the struct (which have to be set in the initializing function, of course). This results in a C++ like API:
someStruct* str = someStruct_alloc();
someStruct_init(str, 10, 20);
str->someFunc(10, 20, 30);
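A hedged sketch of what such a struct-with-function-pointers might look like (all names invented for illustration, matching the call syntax shown above):

#include <stdio.h>
#include <stdlib.h>

typedef struct someStruct {
    int data;
    /* "method" stored as a function pointer, set during initialization;
       note there is no implicit 'this' in C, so if the method needs the
       struct it would normally be passed explicitly as a first argument */
    void (*someFunc)(int a, int b, int c);
} someStruct;

static void someFunc_impl(int a, int b, int c) {
    printf("someFunc(%d, %d, %d)\n", a, b, c);
}

someStruct* someStruct_alloc(void) {
    return (someStruct*)malloc(sizeof(someStruct));
}

void someStruct_init(someStruct* s, int arg1, int arg2) {
    s->data = arg1 + arg2;
    s->someFunc = someFunc_impl;   /* gives the C++-like call syntax above */
}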
Regarding OOP in C, have you looked at some of the topics on SO? For instance, Can you write object oriented code in C?.
I can't put my finger on an example, but I think they enforce an OO-like discipline in Linux kernel programming as well.
In terms of learning how C works, as opposed to OO in C++, you might find it easier to take a short course in some other language that doesn't have an OO derivative -- say, Modula-2 (one of my favorites) or even BASIC (if you can still find a real BASIC implementation -- last time I wrote BASIC code it was with the QBASIC that came with DOS 5.0, later compiled in full Quick BASIC).
The methods you use to get things done in Modula-2 or Pascal (barring the strong typing, which protects against certain types of errors but makes it more complicated to do certain things) are exactly those used in non-OO C, and working in a language with different syntax might (probably will, IMO) make it easier to learn the concepts without your "programming reflexes" kicking in and trying to do OO operations in a nearly-familiar language.
It has always struck me as strange that the C function "fopen" takes a "const char *" as the second argument. I would think it would be easier to both read your code and implement the library's code if there were bit masks defined in stdio.h, like "IO_READ" and such, so you could do things like:
FILE* myFile = fopen("file.txt", IO_READ | IO_WRITE);
Is there a programmatic reason for the way it actually is, or is it just historic? (i.e. "That's just the way it is.")
I believe that one of the advantages of the character string instead of a simple bit-mask is that it allows for platform-specific extensions which are not bit-settings. Purely hypothetically:
FILE *fp = fopen("/dev/something-weird", "r+,bs=4096");
For this gizmo, the open() call needs to be told the block size, and different calls can use radically different sizes, etc. Granted, I/O has been organized pretty well now (such was not the case originally — devices were enormously diverse and the access mechanisms far from unified), so it seldom seems to be necessary. But the string-valued open mode argument allows for that extensibility far better.
On IBM's mainframe MVS o/s, the fopen() function does indeed take extra arguments along the general lines described here — as noted by Andrew Henle (thank you!). The manual page includes the example call (slightly reformatted):
FILE *fp = fopen("myfile2.dat", "rb+, lrecl=80, blksize=240, recfm=fb, type=record");
The underlying open() has to be augmented by the ioctl() (I/O control) call or fcntl() (file control) or functions hiding them to achieve similar effects.
One word : legacy. Unfortunately we have to live with it.
Just speculation: maybe at the time a "const char *" seemed a more flexible solution, because it is not limited in any way. A bit mask could only hold some 32 distinct flags. Looks like a case of YAGNI to me now.
More speculation: dudes were lazy, and writing "rb" requires less typing than MASK_THIS | MASK_THAT :)
Dennis Ritchie (in 1993) wrote an article about the history of C, and how it evolved gradually from B. Some of the design decisions were motivated by avoiding source changes to existing code written in B or embryonic versions of C.
In particular, Lesk wrote a 'portable I/O package' [Lesk 72] that was later reworked to become the C 'standard I/O' routines
The C preprocessor wasn't introduced until 1972/3, so Lesk's I/O package was written without it! (In very early not-yet-C, pointers fit in integers on the platforms being used, and it was totally normal to assign an implicit-int return value to a pointer.)
Many other changes occurred around 1972-3, but the most important was the introduction of the preprocessor, partly at the urging of Alan Snyder [Snyder 74]
Without #include and #define, an expression like IO_READ | IO_WRITE wasn't an option.
The options in 1972 for what fopen calls could look in typical source without CPP are:
FILE *fp = fopen("file.txt", 1); // magic constant integer literals
FILE *fp = fopen("file.txt", 'r'); // character literals
FILE *fp = fopen("file.txt", "r"); // string literals
Magic integer literals are obviously horrible, so unfortunately the most efficient option (which Unix later adopted for open(2)) was ruled out by the lack of a preprocessor.
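For contrast, a sketch of the two styles side by side on a modern POSIX system: the bitmask style that the preprocessor eventually made practical, next to the string style fopen was stuck with:

#include <fcntl.h>   /* open() and the O_* flag macros (POSIX) */
#include <stdio.h>   /* fopen() */

int main(void)
{
    /* Bitmask style, practical only once #define/#include existed: */
    int fd = open("file.txt", O_RDWR | O_CREAT, 0644);

    /* String style, all fopen could reasonably use in 1972: */
    FILE *fp = fopen("file.txt", "r+");

    (void)fd;
    (void)fp;
    return 0;
}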
A character literal is obviously not extensible; presumably that was obvious to the API designers even back then. But it would have been sufficient (and more efficient) for early implementations of fopen: they only supported single-character strings, checking for *mode being r, w, or a. (See Keith Thompson's answer.) Apparently r+ for read+write (without truncating) came later. (See fopen(3) for the modern version.)
C did have a character data type (char was added to B in 1971 as one of the first steps in producing embryonic C, so it was still new in 1972. Original B didn't have char, having been written for machines that pack multiple characters into a word, so char() was a function that indexed a string! See Ritchie's history article.)
Using a single-byte string is effectively passing a char by const reference, with all the extra overhead of memory accesses, because library functions can't be inlined. (And primitive compilers probably weren't inlining anything, even trivial functions (unlike fopen) in the same compilation unit where inlining would shrink total code size; modern-style tiny helper functions rely on modern compilers to inline them.)
PS: Steve Jessop's answer with the same quote inspired me to write this.
Possibly related: strcpy() return value. strcpy was probably written pretty early, too.
I must say that I am grateful for it - I know to type "r" instead of IO_OPEN_FLAG_R or was it IOFLAG_R or SYSFLAGS_OPEN_RMODE or whatever
I'd speculate that it's one or more of the following (unfortunately, I was unable to quickly find any kind of supporting references, so this'll probably remain speculation):
Kernighan or Ritchie (or whoever came up with the interface for fopen()) just happened to like the idea of specifying the mode using a string instead of a bitmap
They may have wanted the interface to be similar to yet noticeably different from the Unix open() system call interface, so it would be at once familiar yet not mistakenly compile with constants defined for Unix instead of by the C library
For example, let's say that the mythical C standard fopen() that took a bitmapped mode parameter used the identifier OPENMODE_READONLY to specify what is today specified by the mode string "r". Now, suppose someone made the following call in a program compiled on a Unix platform (and that the header that defines O_RDONLY has been included):
fopen( "myfile", O_RDONLY);
There would be no compiler error, but unless OPENMODE_READONLY and O_RDONLY were defined to be the same bit you'd get unexpected behavior. Of course it would make sense for the C standard names to be defined the same as the Unix names, but maybe they wanted to preclude requiring this kind of coupling.
Then again, this might not have crossed their minds at all...
The earliest reference to fopen that I've found is in the first edition of Kernighan & Ritchie's "The C Programming Language" (K&R1), published in 1978.
It shows a sample implementation of fopen, which is presumably a simplified version of the code in the C standard library implementation of the time. Here's an abbreviated version of the code from the book:
FILE *fopen(name, mode)
register char *name, *mode;
{
    /* ... */
    if (*mode != 'r' && *mode != 'w' && *mode != 'a') {
        fprintf(stderr, "illegal mode %s opening %s\n",
                mode, name);
        exit(1);
    }
    /* ... */
}
Looking at the code, the mode was expected to be a 1-character string (no "rb", no distinction between text and binary). If you passed a longer string, any characters past the first were silently ignored. If you passed an invalid mode, the function would print an error message and terminate your program rather than returning a null pointer (I'm guessing the actual library version didn't do that). The book emphasized simple code over error checking.
It's hard to be certain, especially given that the book doesn't spend a lot of time explaining the mode parameter, but it looks like it was defined as a string just for convenience. A single character would have worked as well, but a string at least makes future expansion possible (something that the book doesn't mention).
Dennis Ritchie has this to say, from http://cm.bell-labs.com/cm/cs/who/dmr/chist.html
In particular, Lesk wrote a 'portable I/O package' [Lesk 72] that was later reworked to become the C 'standard I/O' routines
So I say ask Mike Lesk, post the result here as an answer to your own question, and earn stacks of points for it. Although you might want to make the question sound a bit less like criticism ;-)
The reason is simple: to allow the modes to be extended by the C implementation as it sees fit. An argument of type int would not allow that. The C99 Rationale (V5.10), §7.19.5.3 The fopen function, says for example that
Other specifications for files, such as record length and block size, are not specified in the Standard due to their widely varying characteristics in different operating environments.
Changes to file access modes and buffer sizes may be specified using the setvbuf function
(see §7.19.5.6).
An implementation may choose to allow additional file specifications as part of the mode string argument. For instance,
file1 = fopen(file1name, "wb,reclen=80");
might be a reasonable extension on a system that provides record-oriented binary files and allows
a programmer to specify record length.
Similar text exists in the C89 Rationale 4.9.5.3
Naturally, if OR'ed (|) enum flags were used, these kinds of extensions would not be possible.
One example of fopen implementation using these parameters would be on z/OS. An example there has the following excerpt:
/* The following call opens:
the file myfile2.dat,
a binary file for reading and writing,
whose record length is 80 bytes,
and maximum length of a physical block is 240 bytes,
fixed-length, blocked record format
for sequential record I/O.
*/
if ( (stream = fopen("myfile2.dat", "rb+, lrecl=80,\
blksize=240, recfm=fb, type=record")) == NULL )
printf("Could not open data file for read update\n");
Now, imagine if you had to squeeze all this information into one argument of type int!!
As Tuomas Pelkonen says, it's legacy.
Personally, I wonder if some misguided saps conceived of it as being better due to fewer characters typed? In the olden days programmers' time was valued more highly than it is today, since it was less accessible and compilers weren't as great and all that.
This is just speculation, but I can see why some people would favor saving a few characters here and there (note the lack of verbosity in any of the standard library function names... I present string.h's "strstr" and "strchr" as probably the best examples of unnecessary brevity).
Here is the question: what did C (K&R C) look like? The question is about the first ten or twenty years of C's life.
I know (well, I heard it from a prof at my uni) that C didn't have the standard libraries that we get with ANSI C today; they used to write I/O routines in wrapped assembly! The second thing is that the K&R book is one of the best books ever for a programmer to read. This is what my prof told us :)
I would like to know more about good ol' C. For example, what major difference you know about it compared to ANSI C, or how did C change programmers mind about programming?
Just for record, I am asking this question after reading mainly these two papers:
Evolving a language in and for the real world: C++ 1991-2006
A History of C++: 1979-1991
They are about C++, I know! That's why I want to know more about C, because these two papers are about how C++ was born out of C. I am now asking about how C looked before that. Thanks, Lazarus, for pointing out the 1st edition of K&R, but I am still keen to hear more about C from SO gurus ;)
Well, for a start, there was none of that function prototype rubbish. main() was declared thus:
/* int */ main(c, v)
int c;
char *v[];
{
    /* Do something here. */
}
And there was none of that fancy double-slash comments either. Nor enumerations. Real men used #define.
Aah, brings a tear to my eyes, remembering the good old days :-)
Have a look at the 'home page' for the K&R book at Bell Labs, in particular the heading "The history of the language is traced in 'The Development of the C Language', from HOPL II, 1993".
Speaking from personal experience, my first two C compilers/dev environments were DeSmet C (16-bit MS-DOS command line) and Lattice C (also 16-bit MS-DOS command line). DeSmet C came with its own text editor (see.exe) and libraries -- non-standard functions like scr_rowcol() positioned the cursor. Even then, however, there were certain functions that were standard, such as printf(), fopen() fread(), fwrite() and fclose().
One of the interesting peculiarities of the time was that you had a choice between four basic memory models -- S, P, D and L. Other variations came and went over the years, but these were the most significant. S was the "small" model, 16-bit addressing for both code and data, limiting you to 64K for each. L combined a 16-bit segment register with a 16-bit offset register to compute 20-bit addresses, limiting you to 1024K of address space. Of course, in a 16-bit DOS world, you were confined to a physical limitation of 640K. P and D were compromises between the two modes, where P allowed segmented (640K) code and 64K data, and D allowed 64K code and 640K data addressing.
Wikipedia has some information on this topic.
Here is one example of the code that changed with ANSI C for the better:
double GetSomeInfo(x)
int x;
{
    return (double)x / 2.0;
}

int PerformFabulousTrick(x, y, z)
int x, y;
double z;
{
    /* here we go */
    z = GetSomeInfo(x, y); /* argument matching? what's that? */
    return (int)z;
}
I first started working with C on VAX/VMS in 1986. Here are the differences I remember:
No prototypes -- function definitions and declarations were written as
int main()    /* no void to specify empty parameter list */
{
    void foo();    /* no parameter list in declaration */
    ...
}
...
void foo(x, y)
int x;
double y;
{
    ...
}
No generic (void) pointer type; all of the *alloc() functions returned char * instead (which is part of why some people still cast the return value of malloc(); with pre-ANSI compilers, you had to);
Variadic functions were handled differently; there was no requirement for any fixed arguments, and the header file was named differently (varargs.h instead of stdarg.h) - see the sketch after this list;
A lot of stuff has been added to math.h over the years, especially in the C99 standard; '80s-vintage C was not the greatest tool for numerical work;
The libraries weren't standardized; almost all implementations had a version of stdio, math, string, ctype, etc., but the contents were not necessarily the same across implementations.
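The varargs.h point above deserves a sketch. Below is the modern stdarg.h form of a variadic function (a made-up example that sums integers until a 0 sentinel); the comments note how the pre-ANSI varargs.h interface differed:

#include <stdarg.h>
#include <stdio.h>

/* Sum int arguments until a 0 sentinel is seen.
   Pre-ANSI, the header would have been <varargs.h>, the definition would
   have started  int sum(va_alist) va_dcl  with no named parameters at all,
   and va_start(ap) took a single argument (no "last fixed argument"). */
static int sum(int first, ...)
{
    va_list ap;
    int total = first;
    int n;

    va_start(ap, first);
    while ((n = va_arg(ap, int)) != 0)
        total += n;
    va_end(ap);

    return total;
}

int main(void)
{
    printf("%d\n", sum(1, 2, 3, 0));   /* prints 6 */
    return 0;
}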
Look at the code for the Version 6 Unix kernel - that was what C looked like!
See Lions' Commentary on UNIX 6th Edition (Amazon).
Also, it would be easier if you told us your age - your profile says you're 22, so you're asking about code prior to 1987.
Also consider: The Unix Programming Environment from 1984.
While for obvious reasons the core language came before the library, if you get hold of a first edition copy of K & R published in 1978 you will find the library very familiar. Also C was originally used for Unix development, and the library hooked into the I/O services of the OS. So I think your prof's assertion is probably apocryphal.
The most obvious difference is the way functions were defined:
VOID* copy( dest, src, len )
VOID* dest ;
VOID* src ;
int len ;
{
...
}
instead of:
void* copy( void* dest, void* src, int len )
{
...
}
for example. Note the use of VOID; K&R C did not have a void type, and typically VOID was a macro defined as int. Needless to say, to allow this to work, the type checking in early compilers was permissive. From a practical point of view, C's ability to validate code was poor (largely through the lack of function prototypes and weak type checking), hence the popularity of tools such as lint.
In 1978 the definition of the language was the K&R book. In 1989 it was standardised by ANSI, and later by ISO; the 2nd edition of K&R was based on ANSI C, and the book is no longer regarded as the language definition. It is still the best book on C IMO, and a good programming book in general.
There is a brief description on Wikipedia which may help. Your best bet is to get a first edition copy of K&R, however, I would not use it to learn C, get a 2nd ed. for that.
I started using C in the early 1980's. The key difference I've seen between now and then was that early C did not have function prototypes, as someone noted. The earliest C I ever used had pretty much the same standard library as today. If there was a time when C didn't have printf or fwrite, that was before even my time! I learned C from the original K&R book. It is indeed a classic, and proof that technically sophisticated people can also be excellent writers. I'm sure you can find it on Amazon.
You might glance at the obfuscated C contest entries from the time period you are looking for.
16 bit integers were quite common in the ol' days.