int main() {
    int n;
    long u = 0, d = 0, count = 0, i = 0;
    char *p = (char *)malloc(sizeof(char) * n);
    scanf("%ld", &n);
    scanf("%s", p);
    for (i = 0; i < n; i++) {
        if (p[i] == 'U') {
            u = u + 1;
        }
        if (p[i] == 'D') {
            d = d + 1;
        }
        if ((d - u) == 0 && p[i] == 'U') {
            count = count + 1;
        }
    }
    printf("%ld", count);
    return 0;
}
In this standard syntax for dynamic memory allocation, if I replace "int n;" with "long int n;", an error pops up saying:
GDB trace:
Reading symbols from solution...done.
[New LWP 10056]
Core was generated by `solution'.
Program terminated with signal SIGSEGV, Segmentation fault.
I have searched everywhere for a solution, though I don't quite know what to search for. I would be grateful if anyone could help me out. Thanks :)
(This was executed on an online compiler.)
There are a couple of things that I would like to point out:
First of all, you do not have to declare n as "long int". "long int" and "long" are the same. So,
long int n; //is same as
long n;
malloc() works perfectly fine whether n is an "int" or a "long". However, you don't seem to have initialized n. What is the value of n? C does not perform auto-initialization of variables and n might have a garbage value (even negative) which might cause your program to crash. So please give a value to n.
long n = 10; //example
or use a scanf() to input a value.
Now in your code, what is scanf() doing "after" malloc? I presume that you intended to read a value for n and then pass it to malloc. So please change the order of code to this:
scanf("%ld",&n);
char *p=(char *)malloc(sizeof(char)*n);
I ran your program with these changes on my system and it works fine (no segmentation fault)
malloc() limits: We know that malloc allocates from a heap. But I really don't see malloc returning NULL on current platforms (which are generally 64 bit). However, if you do try to allocate a very large chunk of memory, malloc might return NULL which will cause your program to crash.
So it's good to check the return value for malloc() and if that's NULL then take appropriate actions (such as retry or exit the program)
Having a check like the one below will always help:
if (p == NULL) {
    printf("Malloc error");
    exit(1);
}
Extracting the relevant parts of your code:
int n;
char *p=(char *)malloc(sizeof(char)*n);
The parameter to malloc is of type size_t, which is an unsigned type. If you pass an argument of any other integer type, it will be implicitly converted to size_t.
You report that with int n; you don't see a problem, but with long int n; your program dies with a segmentation fault.
In either case, you're passing an uninitialized value to malloc(). Just referring to the value of an uninitialized object has undefined behavior.
It may be that the arbitrary long int value you're passing to malloc() happens to cause it to fail and return a null pointer, causing a segmentation fault later when you try to dereference the pointer; the arbitrary int value might just happen to cause malloc to succeed. Checking whether malloc succeeded or failed would likely avoid the segmentation fault.
Passing an uninitialized value to malloc() is a completely useless thing to do. The fact that it behaves differently depending on whether that uninitialized value is an int or a long int is not particularly significant.
If you're curious, you might add a line to print the value of n before calling malloc(), and you definitely should check whether malloc() reported failure by returning a null pointer. Beyond that, you know the code is incorrect. Don't waste too much time figuring out the details of how it fails (or, worse, why it sometimes doesn't fail). Just fix the code by initializing n to the number of bytes you actually want to allocate. (And define n as an object of type size_t.)
Some more points:
The code in your question is missing several required #include directives. If they're missing in your actual code, you should add them. If they're present in your actual code, you should have included them in your question. Don't make assumptions about what you can safely leave out.
int main() should be int main(void). (This is a minor point that probably doesn't make any practical difference.)
scanf("%s",p);
This is inherently dangerous. It reads a blank-delimited string that can be arbitrarily long. If the user enters more characters than the buffer p points to can hold, you have undefined behavior.
u=u+1;
Not incorrect, but more idiomatically written as u++;.
(d-u)==0 is more clearly and safely written as d == u. (For extreme values of d and u the subtraction can overflow; an equality comparison doesn't have that problem.)
I was trying to induce an ENOMEM error just to satisfy my own curiosity, so I decided to run the following code:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
double POS_INF = 1.0/0.0;
char *s = malloc(POS_INF);
}
To my surprise, it compiled and ran successfully. Shouldn't there have been an ENOMEM error or some other type of error?
I decided to run valgrind to see what was going on. On the sixth line of the valgrind output, it showed the following:
Argument 'size' of function malloc has a fishy (possibly negative) value: -9223372036854775808
This output really confuses me. Does it think that the POS_INF variable, which is meant to be equal to positive infinity, is actually -9223372036854775808?
Any help on this would be highly appreciated.
Function malloc() as defined in <stdlib.h> takes an argument of type size_t. Passing a double invokes an implicit conversion from double to size_t, which is only defined if the double value is finite and its integral part is within the range of type size_t.
Converting Infinity is hence undefined. Note that computing 1.0/0.0 is not defined on all platforms, but does evaluate to a positive infinity on IEEE-754 conforming architectures such as most modern systems.
As a consequence, your code has undefined behavior. It seems converting a positive infinite value (if your system supports it) to the type unsigned long for which size_t is a typedef on your system produces the value 0x8000000000000000 (9223372036854775808 in decimal, reported as -9223372036854775808 if converted as a signed value), which malloc() reports as fishy... I tend to agree :). Other systems might behave differently, possibly terminating your program abruptly.
Malloc detects this argument value as fishy because programmers passing negative values to malloc() actually pass huge unsigned values as converted implicitly to unsigned type size_t. This explains the error message and the signed conversion. It must be a common enough mistake to warrant issuing a runtime warning to stderr, but it leaves the user clueless about something of interest only to the programmer.
Here is a modified version:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main() {
    double POS_INF = 1.0/0.0;
    char *s = malloc(POS_INF);
    printf("(size_t)%g -> %zu\n", POS_INF, (size_t)POS_INF);
    printf("malloc(%zu) -> %p\n", (size_t)POS_INF, (void *)s);
    if (s == NULL) {
        printf("errno=%d %s (%s)\n", errno,
               errno == ENOMEM ? "(ENOMEM)" : "",
               strerror(errno));
    }
    free(s);
    return 0;
}
Running this program multiple times gives a good illustration of undefined behavior: the conversion gives different values in successive occurrences (but produces a value small enough for malloc() to allocate):
chqrlie$ make 220929-malloc
clang -O3 -Weverything -Wno-padded -o 220929-malloc 220929-malloc.c
220929-malloc.c:8:22: warning: implicit conversion turns floating-point number into integer: 'double' to 'unsigned long'
[-Wfloat-conversion]
char *s = malloc(POS_INF);
~~~~~~ ^~~~~~~
1 warning generated.
chqrlie$ ./220929-malloc
(size_t)inf -> 4194304
malloc(3520768) -> 0x7fbc69400350
How come it's possible to malloc() an infinite amount of memory in C?
It is not possible to malloc an infinite amount of memory in C.
Firstly, remember that programming in general (and C in particular) is not pure mathematics. The concept of "infinity" does not necessarily even exist.
It's true, IEEE-754 has a distinct and meaningful value for "infinity". (It even has two: positive and negative.) And, it's also true, the expression 1.0/0.0 will generate this value.
However, malloc does not take a double. So the concept "malloc an infinite amount of memory" is quite meaningless. malloc actually takes an argument of type size_t, so the only values you can (attempt to) malloc are values that can be expressed as type size_t.
Now, C supports implicit conversions, including from double to size_t, so the compiler will attempt to do that. But I don't think the conversion of double to size_t is perfectly defined (or can be). I can't remember, but I'm pretty sure the rule is that if the truncated double value can't be properly represented as a value of type size_t, the result is undefined. So malloc(1024.0) is fine, and malloc(1024.1) is probably also fine, but malloc(1e100) is not fine, and malloc(1.0/0.0) is right out.
And then, finally, you expressed surprise that malloc didn't fail, but in fact, maybe it did: since your program didn't check, you have no way of knowing!
The last line of your test program was
char *s = malloc(POS_INF);
So you assign malloc's result to a pointer variable s, but then throw it away and exit. Since you don't do anything with s, it wouldn't be too wrong for the compiler to throw away the whole line — that is, to not call malloc at all — since you obviously didn't care about the return value, and would have no way of knowing if the compiler cheated and didn't call it.
Suppose you add the lines
if (s == NULL) {
    perror("malloc");
    exit(1);
}
at the end of the program. Now you're actually in a position to determine whether malloc failed.
After that there are three possibilities:
malloc fails, because the converted size_t value (whatever it is) asks for more memory than is available.
malloc succeeds, because the converted size_t value (whatever it is) asks for memory that is available.
malloc seems to succeed, because although the converted size_t value asks for more memory than is available, you're on a system that overcommits memory, and only fails when you try to use it.
So to really see what's going on, you might want to change the code to
size_t sz = POS_INF;
char *s = malloc(sz);
if (s == NULL) {
    printf("can't malloc %zu bytes: %s\n", sz, strerror(errno));
    exit(1);
}
so that you can actually see what size_t value you got. And to be really sure, you can follow that up with
s[0] = 1;
s[sz/2] = 2;
s[sz-1] = 3;
to see if you actually got memory you can use.
First, if the memory allocation functions (malloc() and family) fail to allocate the memory, they return a null pointer; they don't raise a fault or error on their own. As a best practice, you need to check the returned pointer for validity (non-null, in this case) after the call to a(ny) library function.
That said, malloc() accepts an argument of type size_t. Passing a double invokes an implicit conversion, which is undefined for a value (such as infinity) that cannot be represented in size_t.
This is my first question here, so I apologize in advance.
Whenever it executes, it just takes one number as input and then terminates.
Can't we use this logic to find the greatest and smallest number among any set of numbers?
#include <stdio.h>

void main()
{
    int *p, n, i, max, min;

    printf("How many numbers?= ");
    scanf("%d", &n);

    printf("\nStart entering numbers\n");
    for (i = 0; i < n; i++) {
        scanf("%d", (p + i));
    }

    max = *(p + 0);
    for (i = 0; i < n; i++) {
        if (*(p + i) > max) {
            max = *(p + i);
        }
    }
    printf("Maximum number = %d\n", max);

    min = *(p + 0);
    for (i = 0; i < n; i++) {
        if (*(p + i) < min)
            min = *(p + i);
    }
    printf("Minimum number = %d", min);
}
In your code you have a pointer to an int called p that points to nothing; it is uninitialized.
Then you call scanf() and read into the uninitialized pointer p:
scanf("%d", (p + i));
This will not work: p points to nothing, and when scanf() tries to store something through it, the result is undefined behaviour, most likely a segmentation fault.
To fix it you could allocate memory for p with malloc():
int *p = malloc(sizeof(int) * n);
if (p == NULL)
{
    fprintf(stderr, "malloc failed");
    // error procedure
}
This will create an array capable of storing n elements of type int.
Notes:
Your code uses a lot of *(p+i) expressions; this can cause confusion and lead to error-prone code. It is best to use p[i] instead.
Your main function has return type void; instead you should use int so you can return an error code of some sort if something fails.
You should test the return value of scanf to see if it failed to read something from the user.
You have undefined behaviour here: you declare a pointer p and start assigning values not only to the location p points to but also to the subsequent locations p+1, p+2, etc. But you never asked the system to allocate that memory for you. You may be lucky and get a pointer to a memory address with sufficient contiguous memory available to hold all the values that follow, but you can't rely on that.
A better way would be:
int n, i, max, min;
if (!(scanf("%d", &n))) return;
printf("\nStart entering numbers\n");
int *p = malloc(sizeof(int) * n);
if (p == NULL) return;
Essentially, you want to use malloc() (for which you will have to #include <stdlib.h>) to allocate as much memory as you need for the numbers. For good practice, you also want to check whether the scanf and the malloc worked out right, which is the job of the two if statements that terminate the program in case something went wrong. If they pass, you can be sure to have the memory you need, and the program can go on.
In terms of notation, you can use *(p+1) etc if you want to, but it's more common (and more readable) to use p[1], and in fact the C standard requires the two to be equivalent.
If the number of elements in p is very small, I (personally) wouldn't do the job with malloc() and the matching free() (which one shouldn't forget!);
I would just do something like
int p[100];
instead of *p, and refer to them as p[0] or p[i] instead of using pointer arithmetic.
I am learning C and am a bit confused about why I don't get any warnings or errors from GCC with the following snippet. I am allocating space for 1 char but assigning it to a pointer to int. Is GCC doing something behind the scenes (like silently enlarging the allocation to fit an int)?
#include <stdlib.h>
#include <stdio.h>
typedef int *int_ptr;
int main()
{
    int_ptr ip;
    ip = calloc(1, sizeof(char));
    *ip = 1000;
    printf("%d", *ip);
    free(ip);
    return 0;
}
Update
Having read the answers below, would it still be unsafe and risky if I did it the other way around, e.g. allocating the space of an int and assigning it to a pointer to char? The source of my confusion is the following answer on Rosetta Code: in the function StringArray StringArray_new(size_t size), the coder seems to be doing exactly this: this->elements = calloc(size, sizeof(int)); where this->elements is a char** elements.
The result of calloc is of the type void* which implicitly gets converted to an int* type. The C programming language and GCC simply trust the programmer to write sensible casts and thus do not produce any warnings. Your code is technically valid C, even though it produces an invalid memory write at runtime. So no, GCC does not implicitly allocate space for an integer.
If you would like to see warnings of this kind before running (or compilation), you may want to use, e.g., Clang Static Analyzer.
If you would like to see errors of this kind at runtime, run your program with Valgrind.
Update
Allocating space for 1 int (i.e. 4 bytes, generally) and then interpreting it as a char (1 char is 1 byte) will not result in any memory errors, as the space required for an int is larger than the space required for a char. In fact, you could use the result as an array of 4 char's.
The sizeof operator returns the size of that type as a number of bytes. The calloc function then allocates that number of bytes, it is not aware of what type will be stored in the allocated segment.
While this does not produce any errors, it can indeed be considered a "risky and unsafe" programming practice. Exceptions exist for advanced applications where you'd want to reuse the same memory segment for storing values of a different type.
The code on Rosetta Code you linked to contains a bug in exactly that line. It should allocate memory for a char* instead of an int. These are generally not equal. On my machine, the size of an int is 4 bytes, while the size of a char* is 8 bytes.
C has very little type safety and malloc has none. It allocates exactly as many bytes as you tell it to allocate. It's not the compiler's duty to warn about it, it is the programmer's duty to get the parameters right.
The reason why it "seems to work" is undefined behavior. *ip = 1000; might as well crash. What is undefined behavior and how does it work?
Also you should never hide pointers behind typedef. This is very bad practice and only serves to confuse the programmer and everyone reading the code.
The compiler only cares that you pass the right number and types of arguments to calloc - it doesn’t check to see if those arguments make sense, since that’s a runtime issue.
Yes, you could probably add some special case logic to the compiler when both arguments are constant expressions and sizeof operations like in this case, but how would it handle a case where both arguments are runtime variables like calloc( num, size );?
This is one of those cases where C assumes you’re smart enough to know what you’re doing.
The compiler only checks syntax, not semantics.
Your code's syntax is OK, but its semantics are not.
I am trying to understand a portion of code. I am leaving out a lot of the code in order to make it simpler to explain, and to avoid unnecessary confusion.
typedef void *UP_T;

void FunctionC(void *pvD, int Offset) {
    unsigned long long int temp;
    void *pvFD = NULL;
    pvFD = pvD + Offset;
    temp = (unsigned long long int)*(int *)pvFD;
}

void FunctionB(UP_T s) {
    FunctionC(s, 8);
}

void FunctionA() {
    char *tempstorage = (char *)malloc(0);
    FunctionB(tempstorage);
}

int main() {
    FunctionA();
    return 0;
}
Like I said, I am leaving out a ton of code, hence the functions that appear useless because they only have two lines of code.
What is temp? That is what is confusing me. When I run something similar to this code, and use printf() statements along the way, I get a random number for pvD, and pvFD is that random number plus eight.
But, I could also be printing the values incorrectly (using %llu instead of %d, or something like that). I am pretty sure it's a pointer to the location in memory of tempstorage plus 8. Is this correct? I just want to be certain before I continue under that assumption.
The standard specifies that malloc(0) returns either NULL or a valid pointer, but that pointer is never to be dereferenced. There aren't any other constraints on the actual implementation, so you can't rely on what lies at the returned pointer plus 8.
It's random in the sense that malloc is typically non-deterministic (i.e. gives different results from run to run).
The result of malloc(0) is implementation-defined (but perfectly valid), you just shouldn't ever dereference it. Nor should you attempt to do arithmetic on it (but this is generally true; you shouldn't use arithmetic to create pointers beyond the bounds of the allocated memory). However, calling free on it is still fine.
I'm having trouble with my code and hope you could help. When I input an odd number I'm given a segmentation fault, and a bus error if it's even. I'm trying to add 00's to a data array to bring it from length Nprime up to a new, larger length Ndprime that I input. I'm doing this in a function *fpad, where my paddata array contains Nprime complex numbers (i.e. 2*Nprime components) and needs to be brought up to size 2*Ndprime.
double *fpad(double *paddata, unsigned int Nprime, unsigned int Ndprime)
{
    if (Nprime != Ndprime)
    {
        paddata = (double *)realloc(paddata, (sizeof(double) * ((2 * Ndprime) - 1)));
        for (i >= ((2 * Nprime)); i < (2 * Ndprime); i++) paddata[i] = 0;
        if (paddata == NULL) /* Checks memory is reallocated */
        {
            printf("\nError reallocating memory.\n");
            free(paddata);
            exit(EXIT_FAILURE);
        }
    }
    return (paddata);
}
Any help would be appreciated; I can't see what I'm doing wrong.
You are using an undeclared variable i (or maybe it is a global).
for(i>=((2*Nprime));i<(2*Ndprime);i++) paddata[i]=0;
Your first clause evaluates whether i is greater than or equal to 2*Nprime, but it does not set i. The loop then accesses the array using this never-initialized value of i, which could be anything, including negative, which would lead to problems.
You only check whether the memory reallocation succeeded after the loop diagnosed as problematic above. If the memory allocation fails, you've carefully zapped the original copy of the pointer in this function. There is no point in freeing the null pointer — but since you exit on allocation failure, there isn't too much of a problem.
Put your initialization loop after the memory check, with slightly less exuberance in the number of parentheses:
for (int i = 2*Nprime; i < 2*Ndprime; i++) // C99 (and C++)
paddata[i] = 0.0;
If you can't use C99 notation, declare int i; in the function.
Don't create global variables called i, ever.
Do pay attention to your compiler's warnings. If it wasn't warning you about 'statement with no effect', you haven't turned on enough warnings.
I recommend the memset() function to initialize your dynamic array. I think the index i in the for statement should range from 0 to 2*Ndprime-2.
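Putting those fixes together, a corrected sketch of fpad: the realloc result is checked before the new elements are touched, and the full 2*Ndprime doubles described in the question are allocated (the question's stated target size, assumed here):

```c
#include <stdio.h>
#include <stdlib.h>

double *fpad(double *paddata, unsigned int Nprime, unsigned int Ndprime)
{
    if (Nprime != Ndprime) {
        /* Keep the old pointer until realloc is known to have succeeded. */
        double *tmp = realloc(paddata, sizeof(double) * 2 * Ndprime);
        if (tmp == NULL) {
            printf("\nError reallocating memory.\n");
            free(paddata);              /* original block is still valid here */
            exit(EXIT_FAILURE);
        }
        paddata = tmp;

        /* Zero only the newly added components, with i properly initialized. */
        for (unsigned int i = 2 * Nprime; i < 2 * Ndprime; i++)
            paddata[i] = 0.0;
    }
    return paddata;
}
```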