#include <stdio.h>
#define max_size 100
float array[max_size];
int n,counter;
int main(){
    printf("Enter the size of the array...\n");
    scanf("%d",&n);
    for (counter=0; counter<n; counter++){
        printf("%p\t",&array[counter]);
    }
    printf("\n\n");
    return 0;
}
I am just experimenting with this C program, trying to verify that the size of a float is 8 bytes. But upon running this code with 5 elements in the array, I get the addresses of these elements as follows:
Enter the size of the array...
5
0x555555755040 0x555555755044 0x555555755048 0x55555575504c 0x555555755050
As you can see, for the first float number my system has allocated the memory locations ...40, 41, 42, 43, which is 4 bits of space if I am not wrong. But the float data type is supposed to have 8 bytes of space. I am thinking that the program should have allocated memory space ...40, 41, ..., 4F for 2 bytes of space. So
...40-...4F //for first 2 bytes
...50-...5F //for second 2 bytes
...60-...6F //for third 2 bytes
...70-...7F //for last 2 bytes
So the second address would start at ...80. But this is not the result I am obtaining. What am I missing in this process? Thank you for the help !!
The C standard does not mandate an exact storage size for float; that has been purposely left to the implementation (only minimum range and precision are required).
Most likely, on your system and compiler the size is 4. You can check by using sizeof(float). See also this discussion.
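For example, a quick check (the exact numbers depend on your platform; on most desktop systems float is 4 bytes and double is 8):

#include <stdio.h>

int main(void)
{
    /* The standard only guarantees minimum ranges and precision,
       not exact sizes, so print what this implementation uses. */
    printf("sizeof(float)  = %zu\n", sizeof(float));
    printf("sizeof(double) = %zu\n", sizeof(double));
    return 0;
}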
I'm trying to write a function that, given an array and a value, checks whether the value is already in the array; if it is, it keeps generating a new random value until it finds one that is not in the array, then adds it. I think the problem is my lack of understanding of pointers. Here is what I have so far:
#include <stdio.h>
#include <stdlib.h>

int getNewIndex(int index, int *visitedPixels, int *visitedPixelsIndex);

int main() {
    int *visitedPixels = malloc(2 * sizeof(int));
    int *visitedPixelsIndex = 0;

    srand(1);
    int randIndex = rand() % 16, i;
    printf("Initial randIndex = %d\n", randIndex);

    for(i = 0; i < 16; i++) {
        randIndex = getNewIndex(randIndex, visitedPixels, visitedPixelsIndex);
        printf("randIndex[%d] = %d\n", i, visitedPixels[i]);
    }
    return 0;
}

int getNewIndex(int index, int *visitedPixels, int *visitedPixelsIndex) {
    int i = 0;
    while (i < *visitedPixelsIndex) {
        (index == visitedPixels[i]) ? index = rand() % 16, i = 0 : i++;
    }
    visitedPixels[*visitedPixelsIndex] = index;
    (*visitedPixelsIndex)++;
    //(*visitedPixels) = realloc(visitedPixels, (*visitedPixelsIndex+1) * sizeof(int));
    return index;
}
Any help would be appreciated.
Okay, so. I'm going to try to explain with a metaphor. Hopefully it helps rather than confusing more.
Imagine memory is a long board you can write numbers on. It takes an inch of board to write a small number; bigger numbers are written across more inches of board.
An array, in our metaphor, is just a contiguous length of board you can write stuff into. If you want an array of 5 integers, and each integer takes 4 inches, you'll need 20 inches of board for it. If you wanted to pass all these integers to a function, instead of copying them all across, you would instead write down how many inches from the end of the board your array is. That's what a pointer is. It's a number telling where something is.
When you called malloc( 2 * sizeof( int ) ), you requested a segment of the board big enough for two integers, and you received how many inches from the end of the board that new segment is. So we've got 8 inches of board X inches from the end, with X being our pointer.
Incrementing a pointer says "increase this value to point at the next element of the underlying array". An int * will increase by 4, a pointer to a structure by the size of the structure plus any alignment padding the compiler has decided on for it.
It does not increase the amount of storage.
If I have a pointer to 8 inches of board, write a 4 inch number, increment the pointer to point 4 inches further in, write another 4 inch number and increment again, my pointer is now right past the last element of the array. If I write here, all bets are off. What was on the board after the array? Who knows. It could be anything. Maybe it was a different array. Maybe it was bookkeeping for which parts of the board have been handed out to the program. Maybe it was the end of the board and I'll write off the end. Writing to memory you haven't received permission for from the operating system is where "segment violation" signals, SIGSEGV, and program failures come from.
You need to request more space up front, or request bigger chunks as you need them; realloc will do the latter. And for all of these calls, you have to check whether they failed and terminate or otherwise recover appropriately.
Hopefully this is more helpful than confusing. Good luck :)
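To make that concrete, here is a minimal sketch of one way to repair the posted code, assuming all 16 possible indices are wanted: allocate room for 16 ints up front and keep the count in an ordinary int whose address is passed to the function (in the original, visitedPixelsIndex was initialized to 0, i.e. a null pointer, so *visitedPixelsIndex dereferenced address zero).

#include <stdio.h>
#include <stdlib.h>

#define NUM_PIXELS 16

/* Returns a random index not already present in visited[0..*count-1],
   then records it and bumps the count. Assumes visited has room for
   NUM_PIXELS entries. */
int getNewIndex(int index, int *visited, int *count) {
    int i = 0;
    while (i < *count) {
        if (index == visited[i]) {
            index = rand() % NUM_PIXELS; /* collision: pick a new candidate */
            i = 0;                       /* and restart the scan */
        } else {
            i++;
        }
    }
    visited[*count] = index;
    (*count)++;
    return index;
}

int main(void) {
    /* Enough room up front for every index we will ever store. */
    int *visited = malloc(NUM_PIXELS * sizeof *visited);
    if (visited == NULL)
        return 1;

    int count = 0; /* a real int, not a null pointer */

    srand(1);
    int randIndex = rand() % NUM_PIXELS;
    printf("Initial randIndex = %d\n", randIndex);

    for (int i = 0; i < NUM_PIXELS; i++) {
        randIndex = getNewIndex(randIndex, visited, &count);
        printf("visited[%d] = %d\n", i, visited[i]);
    }
    free(visited);
    return 0;
}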
I was trying to make an array that contains Fibonacci numbers in C, but I got into trouble. I can't get all of the elements, some of the elements are wrongly calculated, and I don't know where I am going wrong.
#include <stdio.h>
int main(void){
    int serie[]={1,1},sum=0,size=2;
    while(size<=4000000){
        serie[size]=serie[size-1]+serie[size-2];
        printf("%d\n",serie[size-1]);
        size+=1;
    }
    return 0;
}
Output:
1
2
4
6
11
17
28
45
73
118
191
309
500
809
1309
2118
3427
5545
8972
14517
23489
38006
61495
99501
160996
260497
421493
681990
1103483
1785473
2888956
4674429
7563385
12237814
19801199
32039013
51840212
83879225
135719437
219598662
355318099
574916761
930234860
1505151621
-1859580815
-354429194
2080957287
1726528093
-487481916
1239046177
751564261
1990610438
-1552792597
437817841
-1114974756
-677156915
-1792131671
1825678710
33547039
1859225749
1892772788
-542968759
1349804029
806835270
-2138327997
-1331492727
825146572
-506346155
318800417
-187545738
131254679
-56291059
74963620
18672561
93636181
112308742
205944923
318253665
524198588
842452253
1366650841
-2085864202
-719213361
1489889733
770676372
-2034401191
-1263724819
996841286
-266883533
729957753
463074220
1193031973
1656106193
-1445829130
210277063
-1235552067
-1025275004
2034140225
1008865221
-1251961850
-243096629
-1495058479
-1738155108
1061753709
-676401399
385352310
-291049089
94303221
-196745868
-102442647
-299188515
-401631162
-700819677
-1102450839
-1803270516
1389245941
-414024575
975221366
561196791
1536418157
2097614948
-660934191
--------------------------------
Process exited after 2.345 seconds with return value 3221225477
Press any key to continue . . .
I don't understand why it is giving that output.
int serie[]={1,1}
Declares an array of two elements. As the array has two elements and indices start from zero, it has two valid indices, 0 and 1, i.e. serie[0] is the first element and serie[1] is the second element.
int size=2;
while(..) {
serie[size]= ...
size+=1;
}
As size starts at 2, the expression serie[2] = ... is invalid. There is no third element in the array, so it writes to an unknown memory region. Executing such an action is undefined behavior: there could be another variable there, some system data, memory of another program, or it could spawn nasal demons. It is undefined.
If you want to store the output in an array, you need to make sure the array has enough elements to hold the input.
And a tip:
int serie[4000000];
may not work, as it will try to allocate 4000000 * sizeof(int), which assuming sizeof(int) == 4 is about 15.3 MiB of memory. Some systems don't allow allocating that much memory on the stack, so you should move to dynamic allocation.
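For illustration, a minimal sketch of the dynamic-allocation route (as other answers explain, int overflows after roughly 46 terms, so only the early values are meaningful Fibonacci numbers):

#include <stdio.h>
#include <stdlib.h>

int main(void){
    size_t limit = 4000000;
    /* Heap allocation instead of a huge stack array. */
    int *serie = malloc(limit * sizeof *serie);
    if (serie == NULL){
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    serie[0] = 1;
    serie[1] = 1;
    for (size_t size = 2; size < limit; size++){
        serie[size] = serie[size-1] + serie[size-2];
        printf("%d\n", serie[size]); /* terms past ~46 overflow int */
    }
    free(serie);
    return 0;
}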
You're also having an integer overflow: at a certain point int is not big enough to hold the numbers, so the values wrap around and give false results.
Your program should be like:
#include <stdio.h>

int main(void){
    long long unsigned series[100] = {1,1};
    int size = 2;
    while(size < 100){
        series[size] = series[size-1] + series[size-2];
        printf("%llu\n", series[size-1]);
        size += 1;
    }
    return 0;
}
The range of long long unsigned is also limited, though, and Fibonacci numbers grow very fast. So this version prints more correct terms, but it too will overflow eventually: once a term exceeds ULLONG_MAX (declared in limits.h), the values wrap around.
The problem with this code:
#include <stdio.h>
int main(void){
int serie[]={1,1},sum=0,size=2;
while(size<=4000000){
serie[size]=serie[size-1]+serie[size-2];
printf("%d\n",serie[size-1]);
size+=1;
}
return 0;
}
... is that it attempts to store a very long series of numbers (4 million) into a very short array (2 elements). Arrays are fixed in size. Changing the variable size has no effect on the size of the array serie.
The expression serie[size]=... stores numbers outside the bounds of the array every time it's executed because the only legal array index values are 0 and 1. This results in undefined behavior and to be honest you were lucky only to see weird output.
There are a couple of possible solutions. The one that changes your code the least is to simply extend the array. Note that I've moved it to file scope (static storage duration) rather than keeping it as an automatic variable, because your implementation probably won't support something of that size on its stack.
#include <stdio.h>

int serie[4000000]={1,1};

int main(void){
    int size=2;
    while(size<4000000){ // note strict less-than: 4000000 is not a valid index
        serie[size]=serie[size-1]+serie[size-2];
        printf("%d\n",serie[size-1]);
        size+=1;
    }
    return 0;
}
The more general solution is to store the current term and the two previous terms in the series as three separate integers. It's a little more computationally expensive but doesn't have the huge memory requirement.
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int term0=0, term1=1, term2;
    while(1)
    {
        if (term0 > INT_MAX - term1) break; // overflow, stop
        term2 = term0 + term1;
        printf("%d\n",term2);
        term0 = term1;
        term1 = term2;
    }
    return 0;
}
This also has the benefit that it won't print any numbers that have "wrapped around" as a result of exceeding the limits of what can be represented in an int. Of course, you can easily choose another data type in order to get a longer sequence of valid output.
You have two problems:
1. You need to allocate more space in serie, as much as you are going to use.
2. Eventually the Fibonacci numbers become too big to fit even in a 64-bit unsigned integer (long long unsigned); I think 90 or so is about the max.
See the modified code:
#include <stdio.h>
// Set maximum number of fib numbers
#define MAX_SIZE 90
int main(void) {
    // Use 64 bit unsigned integer (can't be negative)
    long long unsigned int serie[MAX_SIZE];
    serie[0] = 1;
    serie[1] = 1;
    int sum = 0;
    int size = 0;
    printf("Fib(0): %llu\n", serie[0]);
    printf("Fib(1): %llu\n", serie[1]);
    for (size = 2; size < MAX_SIZE; size++) {
        serie[size] = serie[size-1] + serie[size-2];
        printf("Fib(%i): %llu\n", size, serie[size]);
    }
    return 0;
}
As you are only printing out the numbers, you don't actually have to store all of them (only the two previous numbers), but it really doesn't matter when there are only 90 of them.
I want to write a program that initializes an integer array of size 987654321 for storing values of 1 and 0 only. Here is my program:
#include <stdio.h>
#include <stdlib.h>
int main(){
    int x,y,z;
    int limit = 987654321;
    int arr[limit];
    for (x = 0;x < limit;x++){
        printf("%d \n",arr[x]);
    }
    return 0;
}
but it gives a segmentation fault.
An array of 987654321 ints is certainly too big for a local variable.
If you need an array of that size, you need to allocate it dynamically with malloc, like:
int limit = 987654321;
int *arr = malloc(limit * sizeof(*arr));
if (arr == NULL)
{
    ... display error message and quit
}
...
free(arr); // free it once you're done with the array
BTW are you aware that your array uses roughly 4 gigabytes of memory assuming the size of int is 4 on your platform?
Since you want to store values of 1 and 0 only, and these values require only one bit, you can use a bit array instead of an integer array.
The size of int is 4 bytes (32 bits) usually, so you can reduce the memory required by a factor of 32.
So instead of about 4 GB, you will only need about 128 MB of memory. Resources on how to implement a bit array can be found online. One such implementation is here.
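A minimal sketch of the idea (the helper names and the use of calloc here are just one possible arrangement, not taken from the linked implementation):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define LIMIT 987654321u

/* One bit per flag instead of one int per flag. */
static void set_bit(unsigned char *bits, size_t i) {
    bits[i / CHAR_BIT] |= (unsigned char)(1u << (i % CHAR_BIT));
}

static int get_bit(const unsigned char *bits, size_t i) {
    return (bits[i / CHAR_BIT] >> (i % CHAR_BIT)) & 1;
}

int main(void) {
    /* Round up to whole bytes: about 123 MB instead of about 4 GB. */
    size_t nbytes = (LIMIT + CHAR_BIT - 1) / CHAR_BIT;
    unsigned char *bits = calloc(nbytes, 1); /* all flags start as 0 */
    if (bits == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    set_bit(bits, 123456789);
    printf("%d %d\n", get_bit(bits, 123456789), get_bit(bits, 42)); /* prints: 1 0 */
    free(bits);
    return 0;
}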
I'm writing a program that reads a whole binary file (in my case, a picture) two bytes at a time and prints how many times each 2-byte value (from 0000h to FFFFh) appears in the file, but I'm having an issue with the number of zeros I'm getting. I don't know if it's a common case, but I feel there's something wrong with how I'm allocating the array. The output looks like this:
value occurrences
0 62354
1 13
2 4
3 5
4 2
5 2
6 0
7 2
. .
. .
65535 0
What do you guys think I'm doing wrong?
Here is the code:
#include <stdio.h>
#include <stdlib.h>
int main()
{
    unsigned short int *n;
    int cont=0;
    long lsize,i,j;

    FILE *arq=fopen("C:\\Users\\NB\\Documents\\Testes Allegro\\Trabalho PAQ\\imagens\\426.png","rb");
    FILE *out=fopen("saida.csv","w");

    fseek(arq,0,SEEK_END);
    lsize=ftell(arq);
    rewind(arq);

    n=(unsigned short int*)malloc(sizeof(unsigned short int)*lsize);
    fread(n,sizeof(short int),lsize,arq);

    fprintf(out,"bit,quantidade\n");
    for(i=0x0000;i<=0xffff;i++)
    {
        for(j=0;j<lsize;j++)
        {
            if(n[j]==i)
                cont++;
        }
        fprintf(out,"%li,%d\n",i,cont);
        printf("%li,%d-",i,cont);
        cont=0;
    }

    fclose(arq);
    fclose(out);
    free(n);
    return 0;
}
lsize from ftell is a size in bytes, but you then use it as a count of short ints: you allocate lsize shorts (twice the memory the file needs) and ask fread for lsize shorts (twice the data the file contains). fread stops at end of file, so roughly half of n is never filled from the file, which is very likely where the huge count of zeros comes from (the unwritten half of a fresh large allocation is often zero-filled). You need to size n as lsize/2 values (plus an edge case for odd file sizes) and adjust your loops accordingly.
Your looping is very inefficient too: the nested loops scan the whole buffer 65536 times, whereas a single pass that increments a 65536-entry count array does the same job.
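Here is a sketch of one way to apply both fixes, assuming the same hard-coded file names as the original (the buffer is sized in 16-bit values and the counting is done in a single pass over the data):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *arq = fopen("C:\\Users\\NB\\Documents\\Testes Allegro\\Trabalho PAQ\\imagens\\426.png", "rb");
    FILE *out = fopen("saida.csv", "w");
    if (arq == NULL || out == NULL)
        return 1;

    fseek(arq, 0, SEEK_END);
    long lsize = ftell(arq);          /* file size in bytes */
    rewind(arq);

    long nvalues = lsize / 2;         /* number of 16-bit values (a trailing odd byte is ignored) */
    unsigned short *n = malloc(nvalues * sizeof *n);
    if (n == NULL)
        return 1;
    size_t got = fread(n, sizeof *n, (size_t)nvalues, arq);

    /* Single pass: bump the counter for each value seen. */
    static long count[0x10000];
    for (size_t j = 0; j < got; j++)
        count[n[j]]++;

    fprintf(out, "value,quantidade\n");
    for (long i = 0; i <= 0xffff; i++)
        fprintf(out, "%li,%li\n", i, count[i]);

    free(n);
    fclose(arq);
    fclose(out);
    return 0;
}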
I am writing this simple program in C, using Xcode.
#include <stdio.h>
int main()
{
    double A[200][200][2][1][2][2];
    int B[200][200][2][1][2];
    int C[200][200][2][1][2];
    double D[200][200][2][1][2];
    double E[200][200][2][1][2];
    double F[200][200][2][1][2];
    double G[200][200];
    double H[200][200][2];
    double I[200][200];
    double L[50];
    printf("%d",B);
    return 0;
}
I get the following message attached to printf("%d",B);
Thread 1: EXC_BAD_ACCESS (code=2, address= ….)
So basically it is telling me that I messed up the memory. How is that possible?
BUT, if I comment
// int C[200][200][2][1][2];
it works perfectly.
Any clue? It should not be an Xcode problem, since in Eclipse it does not print anything either.
The default stack size on Mac OS X is 8 MiB (8,192 KiB) — try ulimit -s or ulimit -a in a terminal.
You have an array of doubles (A) weighing in at about 2.5 MiB (200 x 200 x 2 x 1 x 2 x 2 x sizeof(double)). You have 3 other arrays of double (D, E, F) that are half that size, and 2 arrays of int (B, C) that are a quarter of it. Those six alone add up to about 7.7 MB (7.3 MiB). Even G, H and I use a moderate amount of stack space: in aggregate they're as big as D, so they add roughly another 1.2 MiB.
The sum of these sizes is too big for the stack. You can make them file scope arrays, or you can dynamically allocate them with malloc() et al.
Why on earth would you have a dimension of [1]? You can only ever write 0 as a valid subscript, but then why bother?
I'm not quite sure why you observe EXC_BAD_ACCESS. But your code is quite broken. For a start, you pass B to printf and ask it to format it as an integer. It is not an integer. You should use %p (and cast the argument to void *) if you want to print it as a pointer.
The other problem is that your local variables are allocated on the stack, and they are so big that they overflow it. The biggest is A, at sizeof(double)*200*200*2*1*2*2 = 2,560,000 bytes. You cannot expect to allocate such large arrays on the stack; you'll need to switch to dynamically allocated arrays.
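For illustration, here is a minimal sketch of the dynamically allocated route for the two largest arrays (the rest would be handled the same way); pointers to array types keep the original multi-dimensional indexing intact:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Heap allocation instead of huge stack arrays. */
    double (*A)[200][2][1][2][2] = malloc(200 * sizeof *A);
    int    (*B)[200][2][1][2]    = malloc(200 * sizeof *B);
    if (A == NULL || B == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    A[199][199][1][0][1][1] = 3.14;   /* indexed exactly like the original arrays */
    B[0][0][0][0][0] = 42;

    printf("%p\n", (void *)B);        /* %p with a void* cast is how to print a pointer */

    free(A);
    free(B);
    return 0;
}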