'system' was not declared in this scope error - c

I don't get this error in other programs, but I did get it in this one. Here is an example program where I don't get the error:
#include <stdio.h>

int main() {
    system("pause");
} // end main
But in the program below I do get the error:
#include <stdio.h>
//#include <stdlib.h>

// Takes the number from function1, calculates the result and returns recursively.
int topla(int n) {
    if (n == 1)
        return 3;
    else
        return topla(n-1) + topla(n-1) + topla(n-1);
}

// Takes a number from main and calls topla to find out what 3 to the
// power of n is.
int function1(int n) {
    return topla(n);
}

int main() {
    int n; // We use this to calculate 3 to the power of n
    printf("Enter a number n to find what 3 to the power n is: ");
    scanf("%d", &n);
    function1(n);
    system("pause");
} // end main

Just include stdlib.h, but don't use system("pause"): it's not standard and won't work on every system. A simple getchar() (or a loop involving getchar(), since you've used scanf()) should do the trick.
system("pause") is normally found in Windows command-line programs because the Windows command prompt window closes when the program exits, so running the program from the command prompt directly would help, as would using an IDE that handles this for you, like Geany.
Finally, always check the return value of scanf() instead of assuming that it worked.
Note: this code
return topla(n - 1) + topla(n - 1) + topla(n - 1);
can be written as
return 3 * topla(n - 1);
instead of calling topla() recursively three times.
You also don't really need the else, because the function returns when n == 1, so even without the else the recursion will stop at n == 1.
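Putting those suggestions together, a revised program might look roughly like this (just a sketch: it also prints the result, which the original discards, and drops the function1 wrapper):

#include <stdio.h>
#include <stdlib.h>

// 3 to the power of n, computed recursively.
int topla(int n) {
    if (n == 1)
        return 3;
    return 3 * topla(n - 1);   // one recursive call instead of three
}

int main(void) {
    int n;
    printf("Enter a number n to find what 3 to the power n is: ");
    if (scanf("%d", &n) != 1) {   // always check scanf's return value
        fprintf(stderr, "Invalid input\n");
        return 1;
    }
    printf("3^%d = %d\n", n, topla(n));

    // Portable stand-in for system("pause"): consume the rest of the input
    // line left behind by scanf, then wait for one more Enter.
    int c;
    while ((c = getchar()) != '\n' && c != EOF)
        ;
    getchar();
    return 0;
}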

The system function is declared in the standard header <stdlib.h>. If your program calls system(), you must have
#include <stdlib.h>
at or near the top of your source file.
But part of your question is: why didn't the compiler complain when you omitted the #include directive?
The 1990 C standard (sometimes called "ANSI C") permits calls to functions that have not been explicitly declared. If you write, for example:
system("pause");
with no visible declaration for the system function, it would be assumed that system is declared with a return type of int and parameters matching the arguments in the call -- in this case, a single argument of type char*. That happens to be consistent with the actual declaration of system, so with a C90 compiler, you can get away with omitting the #include directive. And some C compilers that support the more current 1999 and 2011 standards (which don't permit implicit declarations) still permit the old form, perhaps with a warning, for the sake of backward compatibility.
Even given a C90 compiler, there is no advantage to depending on the now obsolete "implicit int" rule. Just add the #include <stdlib.h>. More generally, for any library function you call, read its documentation and #include the header that declares it.
As for why you got an error with one of your programs and not another, I don't have an explanation for that. Perhaps you invoked your compiler with different settings. In any case, it doesn't really matter -- though you might look into how to configure your compiler so it always warns about things like this, so you can avoid this kind of error.

Here you need to know about two things.
Firstly, your code works absolutely fine and the program really does find the value of 3^n, so don't worry about that.
Coming to the system() part: in order to use the system() function, you need to include the stdlib.h header file, as the function is declared in that header. So it is good practice to include the header (rather than commenting it out).
Now, the pause command is used on Windows to stop the console from closing after the program finishes, and it exists only on Windows.
Note that system("pause") is also not standard and does not work on other machines, namely Linux: with the system command you are handing a command string directly to the operating system's command interpreter, and those commands are specific to each operating system and cannot be used on another OS.
So it is better to use getchar(), a C standard library function, to hold the console window open.

Related

Why do we need return type and parameter type in function declarations? [closed]

I've been told we somehow need them so the compiler can continue onwards without having read the definition yet.
Somehow we need them in order for the program to work properly, to avoid conflicts between functions.
Please explain.
If you're going to have separate compilation, meaning that the call versus the definition of a function might be in separate source files, compiled at different times, perhaps months or years apart, you need to decide how information passed to and returned from the function is going to be coordinated.
Let's cover the return value first. Functions can be declared to return int, or double, or char *, or just about any of C's other types. (There are one or two exceptions, types which it's impossible to return, but they needn't concern us here.) But for any given ABI, different values get returned in different ways. Integers and pointers might get returned in a general-purpose processor register. Floating-point values might get returned in a floating-point register. Structures might get returned in some specially-designated memory area. In any case, the called function is going to do the returning, but the calling function (compiled separately) is going to do something with the returned value, so it has to know how to emit the right code to do that. And the only way for it to know (again, under C's separate-compilation model) is the function declaration that indicated the return type. If the calling code thought the function was going to return an int, but the called function actually returned a double, it just wouldn't work, because the called function would place its return value in the spot where return values of type double go, and the calling code would fetch a value from the place where return values of type int go, and it would get indeterminate garbage instead.
Now let's talk about the arguments passed to the function. Again, the mechanisms behind argument passing can take several forms, depending on the type(s) of the argument(s). Again, mismatches are easy to make, and they can cause serious problems. If the programmers writing calling code could be relied on to always pass the correct number of arguments, of the correct types, we wouldn't need function prototypes — and indeed, that's how C was for the first few years of its life. These days, however, that risk is generally considered unacceptable, and function prototypes are considered mandatory — by the language standard, by compilers, and by most C programmers. (You might still find a few holdouts, grumbling that prototypes are newfangled or unnecessary, but they're basically an ignorable minority.)
There's a remaining wrinkle concerning "varargs" functions such as printf. Since they don't accept a fixed number of fixed-type arguments, they can't have a prototype that specifies the types of those arguments in advance. An ellipsis ("...") in a function prototype indicates variable arguments that (a) can't be type-checked by the compiler but that (b) do have the default argument promotions performed on them, to provide a little more regularity (basically, small integer types like char and short are promoted to int, and float is promoted to double). Varargs functions, too, are generally considered old-school and risky if not downright dangerous, and are not recommended for new designs, unless perhaps they follow the pattern of printf, meaning that the compiler can peek at the format string (if constant) and do the programmer the favor of double-checking the arguments.
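As a small illustration of the varargs mechanics (the function log_msg is made up for this example), a printf-style wrapper forwards its variable arguments with the <stdarg.h> machinery, and the language itself does not type-check anything after the format string:

#include <stdio.h>
#include <stdarg.h>

// Hypothetical printf-like wrapper: the "..." arguments only undergo the
// default promotions; nothing checks that they match the format string.
void log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);   // forward the variable arguments to vfprintf
    va_end(ap);
}

// usage: log_msg("value = %d\n", 42);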
The requirements here have been evolving ever since C was invented, and (as discussed in the comments) are still changing. If you have specific questions about what's legal and what's not, what works and what won't, you might want to ask them explicitly, but the answer may depend on which version of C we're talking about.
We can get a good picture of most of the issues just by looking at the standard sqrt function. Suppose you say
double x = sqrt(144);
Once upon a time, that was doubly wrong. Once upon a time, without a function declaration in scope, sqrt would be assumed to return int, meaning that this call wouldn't work, because sqrt actually returns a double. (The compiler would emit code to fetch an int and convert it to the double required by x, but this would be meaningless since there's no int actually returned by sqrt to convert.) But, there's a second problem: sqrt accepts a double argument, which this code doesn't pass, so sqrt wouldn't even receive the correct value to take the square root of.
So, once upon a time, you absolutely needed
extern double sqrt();
(which you probably got from <math.h>) to tell the compiler that sqrt returned a double, and it was your responsibility to call
double x = sqrt(144.);
or
double x = sqrt((double)144);
to cause a double value to be passed.
Today, if you call sqrt out of a clear sky (that is, without a declaration in scope), the compiler is more likely to complain that there's no prototype in scope — that is, it will not quietly assume that sqrt returns int. And if there is a declaration in scope, it will be the prototype declaration
extern double sqrt(double);
which explicitly says that sqrt expects one argument of type double. So, today, the code
double x = sqrt(144);
works fine — the compiler knows to implicitly convert the int value 144 to double before passing it to sqrt.
If you did something really wrong, like calling sqrt(44, 55), in the old days you'd get no complaints (and a very wrong answer), while today you'll get an error saying you've passed too many arguments.
This is probably a longer answer than you were looking for, but in closing, there are two final points to make:
No one would claim that any of this makes perfect sense, or is the way you'd design things from scratch, today. We're living (as ever) with a number of imperfect compromises between modernity and backwards compatibility.
The "guarantees" apparently promised by function prototypes — namely, that automatic argument conversions will be performed if possible, and that compile-time errors will be emitted for gross mismatches — are not absolute, and still depend on a certain amount of programmer care, namely to ensure that the prototypes (upon which everything depends) are actually correct. This generally means putting them in .h files, included both by the caller and the defining source file. See also this question exploring how those vital conventions might be enforced.
Because C compilers compile code from top to bottom. If you are going to call a function from your main function, you have two ways to do it:
You can write your function first and then main (the compiler reads from top to bottom, so it will already have seen the function).
Or, as you did, you can use a declaration to tell the compiler "hey, there is a function I made that I will use later".
Example:
Declaration without parameters: COMPILES.
Declaration without parameters and also without a return type: COMPILES, BUT WITH A WARNING.
Call BEFORE the definition (and no declaration): ERROR.
Compiles perfectly (as @Vlad from Moscow pointed out, the parameters aren't even needed in the declaration):
#include <stdio.h>

int sum ();   // declaration with no parameters (if you also omit the type int, it compiles, but with a warning)

int main (void)
{
    int numa = 2, numb = 3;

    printf ("Sum %d + %d = %d\n", numa, numb, sum ( numa, numb ));

    return 0;
}

int sum ( int numa, int numb )
{
    return numa + numb;
}
output:
./ftype_def_dec
Sum 2 + 3 = 5
Return type int of the function omitted (compiles, but with a warning):
gcc -o ftype_def_dec ftype_def_dec.c
ftype_def_dec.c:3:1: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
sum ();
^
1 warning generated.
Does not compile (if you omit the declaration of a function that is called in main() before its definition):
#include <stdio.h>

// int sum ();

int main (void)
{
    int numa = 2, numb = 3;

    printf ("Sum %d + %d = %d\n", numa, numb, sum ( numa, numb ));

    return 0;
}

int sum ( int numa, int numb )
{
    return numa + numb;
}
gcc -o ftype_def_dec ftype_def_dec.c
ftype_def_dec.c:9:44: error: implicit declaration of function 'sum' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
printf ("Sum %d + %d = %d\n", numa, numb, sum ( numa, numb ));
^
1 error generated.
Because the compiler has to know how your function can be called and used in other functions (such as main), i.e. what parameters it accepts and what its return type is (although the former can be omitted).
As mentioned in another answer, the compiler compiles your program from top to bottom. Thus, when it reaches a line where you use your function, if you have not previously declared it, it does not know how to parse and compile that statement, or whether the call contains an error.
Take a simple example:
void square(int*);

int main(void) {
    int number = 4;
    number = square(number); // ERROR
    // correct usage would be:
    square(&number);
    return 0;
}

void square(int *num) {
    if (!num)
        return;
    *num *= *num;
}
However, if you had not declared square before main, the compiler would have no way of knowing what to do with your call to square inside main, or which of the two calls was an error.
In other words, square is an undeclared identifier if you have not declared the function before calling it.
Just like you cannot do something like:
int main(void) {
    int x = 2;
    y = x + 5; // ERROR: undeclared identifier 'y'
    int y;
    return 0;
}
... because the compiler has no idea what y is by the time it reaches its first usage.

How does a forward declaration work?

I understand declaring factorial before main. But how can main calculate the answer when the factorial formula comes after it?
#include <stdio.h>

long long factorial(int);

int main()
{
    int n;
    long long f;
    printf("Enter a number and I will return you its factorial:\n");
    scanf_s("%d", &n);
    if (n < 0)
        printf("No negative numbers allowed!\n"); // prevent negative numbers
    else
    {
        f = factorial(n);
        printf("The factorial of %d is %lld\n", n, f); // %lld to match long long
    }
    return 0;
}

long long factorial(int n)
{
    if (n == 0)
        return 1;
    else
        return (n * factorial(n - 1));
}
But how can main calculate the answer when the factorial formula comes after it?
First of all, main does not calculate the answer; it's your factorial function that does it for you. There are also three steps you should know about when writing programs:
You write the code in a file.
You compile the file, and the compiler checks for syntactic mistakes; no calculation happens in this phase, it is just analysis of the source text.
Then linking takes place. If you receive a linker error, it means that your code compiles fine but some function or library that is needed cannot be found. This occurs in what we call the linking stage and will prevent an executable from being generated. Many compilers drive both the compiling and the linking stage.
Then, when you actually run your code, control flows into the factorial function and the calculation happens, i.e. at runtime. Use a debugger to see this.
(See Compiling, Linking and Building C/C++ Applications for a diagram of this process.)
From Wiki:
In computer programming, a forward declaration is a declaration of an
identifier (denoting an entity such as a type, a variable, a constant,
or a function) for which the programmer has not yet given a complete
definition....
This is particularly useful for one-pass compilers and separate
compilation. Forward declaration is used in languages that require
declaration before use; it is necessary for mutual recursion in such
languages, as it is impossible to define such functions (or data
structures) without a forward reference in one definition: one of the
functions (respectively, data structures) must be defined first. It is
also useful to allow flexible code organization, for example if one
wishes to place the main body at the top, and called functions below
it.
So basically the main function does not at all need to know how factorial works.
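As a side note, the mutual-recursion case mentioned in the quote is exactly where a forward declaration becomes unavoidable. A minimal sketch (the function names are invented for the example):

#include <stdio.h>
#include <stdbool.h>

bool is_odd(unsigned n);   // forward declaration: is_even calls it before its definition

bool is_even(unsigned n)
{
    return n == 0 ? true : is_odd(n - 1);
}

bool is_odd(unsigned n)
{
    return n == 0 ? false : is_even(n - 1);
}

int main(void)
{
    printf("%d\n", is_even(10));   // prints 1
    return 0;
}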
But how can main calculate the answer when the factorial formula comes after it?
The order in which a C program executes is only partially determined by the order in which the text appears.
For instance, look at the printf function you are using. That doesn't appear in your program at all; it is in a library which is linked to your program.
The forward declaration makes it known (from that point in the translation unit) that there is expected to exist such and such a function having a certain name, arguments and return value.
The simple answer is that your C program is processed from beginning to end before it begins to execute. So by the time main is called, the factorial function has already been seen and processed by the C compiler.
An address is assigned to the compiled factorial function, and that address is "backpatched" into the compiled main function when the program is linked.
The reason forward declarations are needed at all is that C is an old-fashioned language originally designed to allow one-pass compilation. This means that the compiler "translates as it goes": functions earlier in a translation unit are compiled and emitted before later functions are seen. To correctly compile a call to a function which appears later (or, generally, appears elsewhere, like in another translation unit), some information must be announced about that function so that it is known at that point.
It works in this manner: take finding the factorial of 3 as an example.
Recursion: factorial(3) calls factorial(2), which calls factorial(1), and the results multiply back up as 3 * 2 * 1 = 6.
As the factorial of 0 is 1 and the factorial of 1 is also 1, you can write:
if (n <= 1)
    return 1;
When the compiler sees f = factorial(n); in the main function, it has no idea what that call means: it doesn't know where the function is defined or whether the argument being passed is correct, because it's a user-defined function whose definition comes after the main function.
Hence there has to be some way to tell the compiler "I am using a function named factorial that returns long long and takes a single int argument"; that is why you write a prototype of the function before main().
Whenever you call the function factorial the compiler cross checks with the function prototype and ensures correct function call.
A function prototype is not required if you define the function before main.
Example case where function prototyping is not required:
/* function prototyping is not required */
long long factorial(int n)
{
    // body of factorial
}

int main()
{
    ...
    f = factorial(n);
    ...
}
Here, the compiler knows the definition of factorial; it knows the return type, the argument type, and the function name as well, before it is called in main.

Trying to understand usefulness of assert

What is the usefulness of assert(), when we could also use printf and if-else statements to inform the user that b is 0?
#include <stdio.h>
#include <assert.h>

int main() {
    int a, b;
    printf("Input two integers to divide\n");
    scanf("%d%d", &a, &b);
    assert(b != 0);
    printf("%d/%d = %.2f\n", a, b, a/(float)b);
    return 0;
}
There are two things here that people conflate: runtime errors and programming bugs.
Avoid a runtime error with if
Say the user is supposed to enter only numbers, but the input encountered is alphanumeric; that is a runtime error. There's nothing that you, the programmer, can do about it beforehand. You have to report it to the user in a friendly, graceful way; this involves bubbling the error up to the right layer, etc. There are multiple ways of doing this: returning an error value, throwing an exception, and so on. Not all options are available in all languages.
Catch a bug with an assert
On the other hand, there are design-time assumptions and invariants a programmer depends on when writing new code. Say you're authoring a function make_unit_vec that divides a vector's components by its length. Passing the 0 vector breaks this function, since it leads to a divide-by-zero, an invalid operation. However, you don't want to check for this every time inside the function, since you're sure none of the callers will pass the 0 vector -- an assumption not about external user input but about the internal code within the programmer's control.
So the caller of this function should never pass a vector with length 0. What if some other programmer doesn’t know of this assumption and breaks make_unit_vec by passing a 0 vector? To be sure this bug is caught, you put an assert i.e. you're asserting (validating) the design-time assumption you made.
When a debug build is run under the debugger, this assert will "fire" and you can check the call stack to find the erring function! However, beware that assertions don’t fire when running a release build. In our example, in a release build, the div-by-zero will happen, if the erroneous caller isn’t corrected.
This is the rationale behind assert being enabled only for debug builds i.e. when NDEBUG is not set.
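A sketch of the make_unit_vec idea described above (the struct and the function body here are invented for illustration, not taken from the question's code):

#include <assert.h>
#include <math.h>

struct vec2 { double x, y; };

// Design-time assumption: callers never pass the zero vector.
// The assert documents and checks that contract in debug builds only.
struct vec2 make_unit_vec(struct vec2 v)
{
    double len = sqrt(v.x * v.x + v.y * v.y);
    assert(len != 0.0);   // fires in a debug build if a caller breaks the assumption
    struct vec2 u = { v.x / len, v.y / len };
    return u;
}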
Asserts are not used to inform users about anything. Assertions are used to enforce invariants in your program.
In the above program, the assert is misused. The b != 0 is not an invariant. It is a condition to be checked at runtime and the proper way to do it would be something like
if (b == 0) {
    fprintf(stderr, "Can't divide by 0\n");
    return -1;
}
This means that the program checks the user input and bails out if it's incorrect. The assert would be inappropriate because, one, it can be compiled out by defining the NDEBUG macro, which would in your case alter the program logic (if the input is 0, the program would continue on to the printf when NDEBUG is defined), and two, the purpose of an assert is to assert a condition that must hold at a given point in the program.
The example in the wikipedia article I've linked to gives you a place where an assert is the right thing to use. It helps to ensure program correctness and improves readability when someone is trying to read and understand the algorithm.
In your program, if the condition b != 0 holds true then execution continues; otherwise the program is terminated and an error message is displayed, which is not possible with printf alone. Using if-else would work too, but that is not a macro that can be compiled out.
assert will stop the program, it marks a condition that should never occur.
From Wikipedia,
When executed, if the expression is false (that is, compares equal to 0), assert() writes information about the call that failed on stderr and then calls abort(). The information it writes to stderr includes:
the source filename (the predefined macro __FILE__)
the source line number (the predefined macro __LINE__)
the source function (the predefined identifier __func__) (added in C99)
the text of expression that evaluated to 0 [1]
Example output of a program compiled with gcc on Linux:
program: program.c:5: main: Assertion `a != 1' failed.
Abort (core dumped)
It stops the program and prevents it from doing anything wrong (besides abruptly stopping).
It's like a baked-in breakpoint; the debugger will stop on it without the extra manual work of looking up the line number of the printf and setting a breakpoint. (assert will still helpfully print this information though.)
By default it's defined to be conditional on the NDEBUG macro, which is usually set up to reflect the debug/release build switch in the development environment. This prevents it from decreasing performance or causing bloat for end users.
Note that assert isn't designed to inform the user of anything; it's only for debugging output to the programmer.
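As a small illustration of that switch, defining NDEBUG before including <assert.h> (or compiling with -DNDEBUG) turns every assert into a no-op:

/* #define NDEBUG */   // uncomment (or compile with -DNDEBUG) to disable asserts
#include <assert.h>
#include <stdio.h>

int main(void)
{
    int x = 0;
    assert(x != 0);   // aborts in a debug build; does nothing when NDEBUG is defined
    printf("reached only if asserts are disabled or x != 0\n");
    return 0;
}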
That is not a good use of assert; validating user input with assert is too brutal for anything other than your own private use.
This is a better example — it is a convenient one I happen to have kicking around:
#include <assert.h>
#include <stdlib.h>

/* Returns a pseudo-random integer in the range [lo, hi]. */
int Random_Integer(int lo, int hi)
{
    int range = hi - lo + 1;
    assert(range < RAND_MAX);                    /* the algorithm requires the range to fit */
    int max_r = RAND_MAX - (RAND_MAX % range);   /* reject values above this to avoid bias */
    int r;
    while ((r = rand()) > max_r)
        ;
    return (r % range + lo);
}
If the value of RAND_MAX is smaller than the range of integers requested, this code is not going to work, so the assertion enforces that critical constraint.

On understanding how printf("%d\n", ( { int n; scanf("%d", &n); n*n; } )); works in C

I came across this program via a Quora answer:
#include <stdio.h>

int main() {
    printf("%d\n", ( { int n; scanf("%d", &n); n*n; } ));
    return 0;
}
I was wondering how this works and whether it conforms to the standard.
This code is using a "GNU C" feature called statement-expressions, whereby a parentheses-enclosed compound statement can be used as an expression, whose type and value match the result of the last statement in the compound statement. This is not syntactically valid C, but a GCC feature (also adopted by some other compilers) that was added presumably because it was deemed important for writing macros which do not evaluate their arguments more than once.
You should be aware of what it is and what it does in case you encounter it in code you have to read, but I would avoid using it yourself. It's confusing, unnecessary, and non-standard. The same thing can almost always be achieved portably with static inline functions.
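For instance, a portable rewrite of the snippet above can move the read-and-square step into a small function instead of a statement expression (a sketch; error handling is still omitted to stay close to the original):

#include <stdio.h>

// Reads an integer and returns its square: the portable equivalent of the
// GNU statement expression in the original snippet.
static inline int read_and_square(void)
{
    int n = 0;
    scanf("%d", &n);   // no error checking, mirroring the original
    return n * n;
}

int main(void)
{
    printf("%d\n", read_and_square());
    return 0;
}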
I don't believe it does work... n has local scope within the braces... when you exit the braces, n becomes undefined, though I suppose it could still exist somewhere on the stack and might work. It's begging for implementation issues and I guarantee it is implementation-dependent.
One thing I can tell you is that anyone working for me who wrote that would be read the riot act.
It works. I get no warnings from gcc, so I suppose that it conforms to the standard.
The magic is the statement expression:
{ int n; scanf("%d", &n); n*n; }
This nugget scans an integer from the console (with no error checking) and squares it, returning the squared number: the value of the last expression in the block, n*n, is what the whole parenthesized block yields.
That value gets passed to the printf:
printf("%d\n", <scanned>);
So, to answer your questions: Yes, it works. Yes, it's "standard" (to the extent that anyone follows the standard entirely). No, it's not a great practice. This is a good example of what I by knee-jerk reaction call a "compiler love letter", designed mostly to show how smart the programmer is, not necessarily to solve a problem or be efficient.

Printing control with C

When I execute the code below, the compiler returns the message "(.text+0x31): undefined reference to 'sqrt'", but if I take away the q* the compiler correctly gives me 8.000000.
I'm trying to get the program to multiply INCREMENT by 1 (and eventually by 2 and 3 when I write the loop in).
How come the code below doesn't work?
#include <stdio.h>
#include <math.h>

#define INCREMENT 64

int main ()
{
    int q = 1;
    printf("%f", sqrt(q*INCREMENT));
    return 0;
}
You probably need to link against the math library (with gcc, add -lm to the link command), though I thought Visual C++ does this automatically...
The reason it works without the q is that the compiler optimizes the sqrt call away, since its argument is a compile-time constant.
The code is correct C code. I tested it under VS 2010 and it returned the value 8. However, it is not correct C++ code: sqrt becomes ambiguous when the argument is an integer. Is it possible that your source file has a .cpp extension instead of a .c extension?
