I used the following code to find it out, but I always get 1 as the answer. Is there something wrong? Thanks.
#include <stdio.h>
#include <stdlib.h>

int main() {
    int mult = 0;
    int chk = 8;
    do {
        mult += 1;
        int *p = (int *)malloc(1024 * 1024 * 1024 * mult);
        if (p == 0) {
            chk = 0;
        } else {
            free(p);
        }
    } while (chk != 0);
    mult = mult - 1;
    printf("The number of gigs allocated is : %d\n", mult);
    return 0;
}
Just to help: I have a 64-bit system with both Windows and Linux installed. Is the above logic correct, even though I am getting just 1 GB as the answer on a 64-bit system?
If it is a 32-bit OS, then it is not surprising that the largest contiguous block would be 1 GB (or somewhere between that and 2 GB). On a 64-bit OS, larger blocks would be possible.

Note also that 1024*1024*1024*mult is computed in plain int arithmetic, and with a 32-bit int it overflows (undefined behavior) as soon as mult reaches 2, before the result is ever converted to size_t; that alone can produce the answer 1 even on a 64-bit OS.

If you change your code to allocate smaller individual pieces, you will likely be able to allocate more than 1 GB total.
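Here is a minimal corrected sketch of the probe loop (my code, not yours, and assuming a 64-bit build where size_t is 64 bits); the key change is the cast, which makes the whole multiplication happen in size_t rather than int:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t gigs = 0;

    for (;;) {
        /* Request (gigs + 1) GiB as one contiguous block. */
        void *p = malloc((size_t)1024 * 1024 * 1024 * (gigs + 1));
        if (p == NULL)
            break;          /* could not get that much in one block */
        free(p);
        gigs++;
    }
    printf("The number of gigs allocated is : %zu\n", gigs);
    return 0;
}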
These questions might help you a bit: How much memory was actually allocated from heap for an object? and How do I find out how much free memory is left in GNU C++ on Linux
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int gigs = 0;
    /* Keep allocating 1 GiB blocks (1 << 30 bytes) until malloc fails.
       The blocks are deliberately never freed, so this counts how many
       gigabytes can be held at once, not the largest single block. */
    while (malloc(1 << 30)) {
        ++gigs;
    }
    printf("The number of gigs allocated is : %d\n", gigs);
    return EXIT_SUCCESS;
}
I don't quite know how to ask this question, as it is a little confusing to me. I was having a problem with this code:
#include <stdio.h>
#include <stdlib.h>

#define ull unsigned long long
#define SIZE 1000000001
#define min(a, b) ((a < b ? a : b))
#define max(a, b) ((a > b ? a : b))

int solve(void) {
    // unsigned *count = malloc(sizeof(unsigned) * SIZE);
    int k;
    scanf("%d", &k);
    unsigned count[SIZE];
    for (ull i = 0; i < SIZE; i++) {
        count[i] = 0;
    }
    return 0;
}

int main(void) {
    unsigned t;
    if (scanf("%u", &t) != 1) return 1;
    while (t-- > 0) {
        if (solve() != 0) return 1;
    }
}
This code gives me a segfault. My observations:

1. It runs fine until it reaches the solve function.
2. On calling solve, it gives a segfault.
3. It has nothing to do with scanf("%d", &k); removing that line gives the same error.
4. If we decrease the SIZE value, it runs fine.
5. Instead of creating the array on the stack, I can use the heap, and that works fine.
6. If I only declare the array count in solve, instead of also taking k as input and initializing all the values of count to 0, I do not get any segfault.

So I have some questions about this:

Is this due to a memory limitation on the array, or a memory limitation on the stack frame of the function solve (or possibly another reason which I can't find)?

If this is due to some kind of memory limitation, isn't it too low for a program?

How does the compiler check for such errors? Adding any kind of print statement doesn't run before the array declaration, since I get the segfault as soon as the program reaches solve. So the compiler somehow knows there is a problem with the code without even getting there.

And specifically about the 6th point: as far as I know, declaring an array reserves memory for it, so by initializing it I am doing nothing that increases the size of the array. So why do I not get any kind of error when declaring the array, while I do get a segfault when I initialize all those values in the array?

Maybe I am seeing this in a totally wrong way, but this is how I think it is. So if you know the reason for this, please tell me about that too.
It depends on your operating system. On Windows, the typical maximum size for a stack is 1MB, whereas it is 8MB on a typical modern Linux, although those values are adjustable in various ways.
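On Linux, one of those "various ways" is the process resource limit, settable from the shell with ulimit -s or programmatically with getrlimit/setrlimit from <sys/resource.h>. A minimal sketch (my code, Linux-specific, not from the original answer):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("stack soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);

    /* Raise the soft limit up to the hard limit; the main thread's
       stack can then grow on demand up to the new limit. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}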
For me it works properly; check it on another platform or another system.
Though it is strange, I am getting a segmentation fault while scanning an integer value. Here is my program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    int t, n, i, j;
    char c;
    int grid[1000][1000], right[1000][1000], down[1000][1000];
    scanf("%d", &t);
    printf("hello\n");
    while (t--)
    {
        scanf("%d", &n);
        memset(right, 0, sizeof(int) * 1000 * 1000);
        memset(down, 0, sizeof(int) * 1000 * 1000);
        for (i = 0; i < n; i++)
        {
            for (j = 0; j < n; j++)
            {
                scanf("%c", &c);
                printf("CHAR = %c\n", c);
                if (c == '.')
                    grid[i][j] = 1;
                else
                    grid[i][j] = 0;
            }
        }
        for (i = 0; i < n; i++)
        {
            for (j = 0; j < n; j++)
            {
                printf("%d", grid[i][j]);
            }
        }
    }
    return 0;
}
gdb shows a segmentation fault at the line scanf("%d",&t);. I cannot figure out how this is happening.
[Using gcc 4.8.4 on a 32-bit Linux machine]
The problem is that your arrays grid, right, and down are too big to fit on the stack.

As far as the reason for no compile error is concerned: there is nothing wrong with this code in terms of syntax or semantics, and the linker has no problem with it either. The problem arises at run time, when the program actually needs that much stack. The stack is usually 8 MB on Linux systems, and your three 1000x1000 int arrays are about 4 MB each, roughly 12 MB total, which surpasses that. Most likely the fault only shows up at the first scanf because that call is the first thing to touch the far end of the oversized stack frame.

You can make the arrays static (as suggested in the comments), since static objects are allocated in the BSS or data segment rather than on the stack. But in reality you need to rethink whether you need such big arrays.
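For concreteness, here is a sketch of the heap-based alternative (my code, not from the original post), using pointers to 1000-element rows so the grid[i][j] indexing syntax stays the same:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Each table is 1000 * 1000 * sizeof(int), about 4 MB, now on the
       heap. calloc zero-fills, which also replaces the memset calls. */
    int (*grid)[1000] = calloc(1000, sizeof *grid);
    int (*right)[1000] = calloc(1000, sizeof *right);
    int (*down)[1000] = calloc(1000, sizeof *down);

    if (grid == NULL || right == NULL || down == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    grid[2][3] = 1;                 /* indexed exactly like the 2-D arrays */
    printf("%d\n", grid[2][3]);

    free(grid);
    free(right);
    free(down);
    return 0;
}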
Set your linker to instruct the loader to allocate a maximum stack segment large enough to fit your big local arrays.
#include <stdio.h>

int main(void)
{
    char *s = "hello";   /* s points at the string literal "hello" */
    char *p = s;         /* p is a copy of that pointer */
    printf("%c\t%c", p[0], s[1]);   /* prints: h    e */
    return 0;
}
The output of this program is: h e. Can anyone please explain how this program works? I'm relatively new to C.
p[0] is identical to *(p + 0), and similarly s[1] is *(s + 1). The [] operator always works through a pointer and behaves the same for arrays and pointers.
Note: there is no array declared in your program; both s and p point at the string literal.
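A small demonstration of that identity (this program is mine, not the asker's):

#include <stdio.h>

int main(void)
{
    char *s = "hello";
    char *p = s;                        /* both point at the same literal */

    printf("%c %c\n", p[0], *(p + 0)); /* h h */
    printf("%c %c\n", s[1], *(s + 1)); /* e e */
    printf("%c %c\n", 1[s], s[1]);     /* e e: a[i] and i[a] are the same */
    return 0;
}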
Please note the following facts first (they are neutral to the programming language):

Any pointer occupies the same amount of memory, equal to your system's address width; even a void* takes that size.

The address width corresponds to the processor's data-fetching/manipulating capacity; this is what is meant by a 32-bit or 64-bit processor.

Traditionally that capacity was mirrored by the size of int, which is why code like the following is sometimes used to guess the architecture of the CPU (beware that this is only a heuristic):
#include <stdio.h>

int main() {
    if (sizeof(int) == 2) {
        printf("\n 16 Bit Architecture, may be using DOS & Turbo C++ IDE");
    } else if (sizeof(int) == 4) {
        printf("\n 32 Bit Architecture");
    } else {
        printf("\n 64 Bit Architecture");
    }
    return 0;
}
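In practice, sizeof(int) stays 4 bytes on most modern 64-bit systems, so the size of a pointer is a more direct hint of the address width. A minimal sketch (my addition, not the original answer's):

#include <stdio.h>

int main(void)
{
    printf("sizeof(int)   = %zu\n", sizeof(int));    /* 4 on most 32- and 64-bit systems */
    printf("sizeof(void*) = %zu\n", sizeof(void *)); /* 8 on typical 64-bit builds */
    return 0;
}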
I am trying to get code that was working on Linux to also work on my Windows 7.
When I retried the same code, it crashed with a stack overflow. I then removed everything I could to find the line causing the crash, and it left me with this:
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* 256k == 2^18 */
#define ARRAY_SIZE 262144
#define ARRAY_SIZE_IN_BYTES (sizeof(float) * (ARRAY_SIZE))

int main(void)
{
    float a[ARRAY_SIZE] = { };
    float result = 0;
    printf("sum was: %f (should be around 1 300 000 with even random distribution)\n", result);
    return 0;
}
If I change ARRAY_SIZE to 256, the code runs fine. However, with the current value, the float a[ARRAY_SIZE] line crashes at runtime with a stack overflow. It doesn't matter whether I use float a[ARRAY_SIZE]; or float a[ARRAY_SIZE] = { };, they both crash the same way.
Any ideas what could be wrong?
I am using Visual Studio 2010 for compiling.
OK, the stack sizes seem to be explained here: 1 MB is the default on Windows.
Apparently it can be increased in VS 2010 under Properties -> Linker -> System -> Stack Reserve Size. I tested this, and the code works after pumping the stack up to 8 MB.
In the long run I should probably go the malloc way.
Your array is too large to fit on the stack; try using the heap instead:

float *a = malloc(sizeof(float) * ARRAY_SIZE);  /* heap, not stack */
if (a == NULL) { /* handle allocation failure */ }
/* ... use a ... */
free(a);
Segmentation fault when allocating large arrays on the stack
Well, let me guess: I've heard the default stack size on Windows is 1 MB. Your ARRAY_SIZE_IN_BYTES is exactly 1 MB, by the way (assuming float is 4 bytes). So that is probably the reason.
See this link: C/C++ maximum stack size of program
I have written the code for the sieve, but the program runs only for array sizes less than or equal to 1000000. For anything larger, a plain SIGSEGV occurs. Can this be made to run for cases > 1000000, or where am I wrong?
#include <stdio.h>

int main()
{
    unsigned long long int arr[10000001] = {[0 ... 10000000] = 0};
    unsigned long long int c = 0, i, j, a, b;
    scanf("%llu%llu", &a, &b);
    for (i = 2; i <= b; i++)
        if (arr[i] == 0)
            for (j = 2 * i; j <= b; j += i)
                arr[j] = 1;
    for (i = (a > 2) ? a : 2; i <= b; i++)
        if (arr[i] == 0)
            c++;
    printf("%llu", c);
    return 0;
}
This line allocates memory on the stack (which is a limited resource):

unsigned long long int arr[10000001] = {[0 ... 10000000] = 0};

If you were allocating 10,000,000 entries at 4 bytes each, that would be 40 million bytes, already more than your stack can handle. (And on your platform a long long int is at least 8 bytes, meaning 80 million bytes in use!)

Instead, allocate the memory from the heap, which is much more plentiful:

int *arr = malloc(10,000,000 * sizeof(int)); // commas for clarity only. Remove in real code!

Or, if you want the memory initialized to zero, use calloc. Then, at the end of your program, be sure you also free it:

free(arr);

PS: The syntax {[0 ... 10000000] = 0}; is needlessly verbose. To initialize an array to zero, simply write:

int arr[100] = {0}; // That's all!
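Putting it together, here is a minimal sketch of the same sieve with the table on the heap (my adaptation, not the asker's code); using one byte per flag instead of an 8-byte unsigned long long also shrinks the table from roughly 80 MB to 10 MB:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long long a, b, c = 0, i, j;

    if (scanf("%llu%llu", &a, &b) != 2)
        return 1;

    unsigned char *composite = calloc(b + 1, 1);  /* zero-filled flags */
    if (composite == NULL)
        return 1;

    for (i = 2; i <= b; i++)
        if (composite[i] == 0)
            for (j = 2 * i; j <= b; j += i)
                composite[j] = 1;

    for (i = (a > 2) ? a : 2; i <= b; i++)
        if (composite[i] == 0)
            c++;

    printf("%llu", c);
    free(composite);
    return 0;
}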
You declared an array that can hold 10000001 items; if you want to handle larger numbers, you need a bigger array. I'm mildly surprised that it works for 1000000 already; that's a lot of stack space to be using.
Edit: sorry, I didn't notice you had a different number of zeroes there. Don't use the stack for your array and you should be fine: just add static to the array declaration and you'll probably be okay.