The code below is my attempt to instantiate a 2D array, and it is instantiating incorrectly:
THE CODE:
FILE* kernalFile = fopen(argv[1], "r");
int rKernalSize;
fscanf(kernalFile, "%d", &rKernalSize);
unsigned int rKernal[rKernalSize][rKernalSize];
Data from a breakpoint right after that code is run:
rKernalSize VALUES:
Name: rKernalSize
Details: 3
Default: 3
Decimal: 3
Hex: 0x3
Binary: 11
Octal: 03
rKernal VALUES:
Name: rKernal
Details: 0x7ffffffe0cd0
Default: [0]
Decimal: [0]
Hex: [0]
Binary: [0]
Octal: [0]
or
rKernal[][0]
It should be rKernal[3][3]. Here is the input file, in case you want to look at it:
3 -1 1 0 1 0 -1 0 -1 1 3 -1 1 0 1 0 -1 0 -1 1 3 -1 1 0 1 0 -1 0 -1 1
TLDR: rKernalSize is correct (3), but when I create the 2D array with rKernal[rKernalSize][rKernalSize] it does not instantiate correctly! It instantiates as rKernal[][0]; maybe that's a default, but it should be rKernal[3][3].
Forget what the debugger is telling you. In your code, immediately following:
unsigned int rKernal[rKernalSize][rKernalSize];
put the statement:
printf("%zu\n", sizeof(rKernal) / sizeof(unsigned int));
and see what it prints out (hopefully 9).
It's possible the debugging information, created at compile time, is not enough to properly determine the sizes of variable length arrays.
By way of example, even though gcc supports variable length arrays, gdb itself couldn't properly handle them as recently as late 2009 and there's still no mention of them in the documentation dated October 2010.
So I suspect that is the problem, especially since the test code I provided above output 9 as expected.
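For reference, here is a complete little test program along those lines (a sketch; it reads the size from stdin rather than from a file):

#include <stdio.h>

int main(void) {
    int rKernalSize;
    if (scanf("%d", &rKernalSize) != 1 || rKernalSize <= 0)
        return 1;                    /* bad or missing size */

    unsigned int rKernal[rKernalSize][rKernalSize];   /* C99 VLA */

    /* sizeof on a VLA is evaluated at runtime, so this prints size*size */
    printf("%zu\n", sizeof(rKernal) / sizeof(unsigned int));
    return 0;
}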
The basic problem here is that rKernalSize isn't known at compile time, so the array's size can't be recorded in the static type information the debugger relies on. The debug symbol for rKernal describes an array whose dimensions are only determined at runtime, which is why the debugger shows it as a section of memory containing no bytes.
But then you run the program and read '3' with your fscanf; when you dump the results, you see '3'.
This code, by the way, wouldn't work in C89, where an array's size must be a compile-time constant; variable length arrays like this were added in C99 (gcc also accepts them in C++ as an extension).
Now, if you want to allocate the array on the heap instead (so you can check for allocation failure and aren't limited by stack size), here's what you need:
Read your size as you have, using fscanf.
Allocate the memory for your array using malloc, which will look something like
/* needs <stdlib.h> for malloc/exit */
int rkSize;
fscanf(kernalFile, "%d", &rkSize);   // read the size first
unsigned int (*ary)[rkSize];         // pointer to a row of rkSize elements (C99)
if((ary = malloc(rkSize*rkSize*sizeof(unsigned int)))==NULL){
    // for some reason your malloc failed. You can't do much
    fprintf(stderr,"Oops!\n");
    exit(1);
}
// If you got here, then you have your array
Now, because of the pointer-array duality in C, you can treat this as your array
ary[1][1] = 42; // The answer
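If you need to stay within C89, where the pointer-to-row declaration above isn't available, a flat allocation with manual index arithmetic does the same job; a minimal sketch:

unsigned int *flat = malloc(rkSize * rkSize * sizeof *flat);
if (flat != NULL) {
    /* element [row][col] lives at index row * rkSize + col */
    flat[1 * rkSize + 1] = 42;
}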
I have written a binary file using C and checked it using xxd, with the following output:
6161 6161 6161 6161 6161 6161 6161 6161
Now my code:
FILE * read;
int * filebytes;
read = fopen("data/myfile","rb");
fread(filebytes,4,4,read);
for(int x = 0; x < var; x++){
    printf("%x\n",filebytes[x]);
}
fclose(read);
And I'm getting this output:
61616161
61616161
61616171
61616161
Now comes the even weirder thing: if I read out not 4 but 5 times 32 bits, the last byte of the 3rd row is not 71 but 75; reading out 6 times 32 bits gives 79, and so forth (every time adding 4). I can't think of a reason why that would happen every time at the 3rd element I'm reading.
I would like to know how to read a file out in 32-bit pieces without getting weird changes in my data.
Happy about any kind of help.
"man fread" says this:
#include <stdio.h>
size_t fread(void *BUF, size_t SIZE, size_t COUNT, FILE *FP);
where:
BUF - pointer to output memory
SIZE - number of bytes in 1 "element"
COUNT - number of 'elements' to read
FP - open file pointer to read from
Here you have SIZE=4, COUNT=4, so you are trying to read 16 bytes.
But you are reading into the memory location pointed to by filebytes, and filebytes is currently uninitialized: just a garbage value that could be pointing anywhere in memory.
As a result, your fread() command could:
end up crashing as the memory location 'filebytes' could point outside the address space
end up pointing to memory that other code will change later (eg as in your case)
just by sheer chance not be used by any other part of the code, and by luck work.
filebytes needs to point to valid memory, eg:
on the heap: filebytes= (int *) malloc(16);
or on the stack, by making filebytes an local array to the function like:
int filebytes[16];
Note that sizeof(int) might vary per machine/architecture, but int filebytes[16] will allocate at least 32 bytes, since the standard requires int to be at least 16 bits (32 bits is typical).
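Putting that together, a minimal corrected sketch of the question's code (file name and counts taken from the question; error handling added, and 'var' replaced by the count actually read):

#include <stdio.h>

int main(void) {
    unsigned int filebytes[4];             /* real storage, not an uninitialized pointer */
    FILE *read = fopen("data/myfile", "rb");
    if (read == NULL)
        return 1;

    /* read up to 4 elements of sizeof(unsigned int) bytes each */
    size_t got = fread(filebytes, sizeof filebytes[0], 4, read);
    for (size_t x = 0; x < got; x++)
        printf("%x\n", filebytes[x]);

    fclose(read);
    return 0;
}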
(Note also that 'var' is not defined in your example - this should be defined. But even with var defined, this might not make sense:
for(int x = 0; x < var; x++){
printf("%x\n",filebytes[x]);
}
because you haven't said what you are trying to do.
E.g. how many bytes in the input file represent 'one integer' ? This could be any number theoretically.
And if say 4 bytes represent 'one integer' then is the most significant byte first, or the least significant byte first ?
And note that your machine might be big-endian or little-endian, so for example if you run on a little-endian machine (least significant byte first) but your file has the most significant byte first, then loading directly into the integer array will not give the correct answer. And even if they match, if you later run the same program on a machine with the opposite endianness, it will start breaking.
You need to decide:
how many bytes are in your integers in the file ?
are they most-significant byte first, or least significant byte first ?
Then you could load into a 'unsigned char' array, and then manually build the integer from that. (That will work no matter endian of the machine.)
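For example, a sketch of reading 4-byte integers portably, assuming the file stores the most significant byte first (big-endian):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Assemble a 32-bit value from 4 big-endian bytes; works on any host endianness. */
static uint32_t be32(const unsigned char b[4]) {
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
         | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

int main(void) {
    unsigned char buf[4];
    FILE *fp = fopen("data/myfile", "rb");
    if (fp == NULL)
        return 1;

    while (fread(buf, 1, 4, fp) == 4)
        printf("%" PRIx32 "\n", be32(buf));

    fclose(fp);
    return 0;
}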
I wrote a program in C that does some different calculations with numbers passed through the command line. For some reason, the result of Average() tends to be something large (and occasionally negative, I'll include a log file) and Std_Dev() tends to print 0. I have the code in a repository on GitHub:
https://github.com/Jordan-Effinger/Data-Analysis
A quick notice about the files: the repository contains a file called type.h. That file is not used in my current build, so just a heads up in case you don't see anything defined in there being used.
Example Results:
main 0 0 0 0 0
Calculating average
Calculating std-deviation
-141545200 1195973704
main 0 0 0 0 0
Calculating average
Calculating std-deviation
-1030105488 1003883182
main 0 0 0 0 0
Calculating average
Calculating std-deviation
1478538976 1111766907
Any thoughts? I think something's going wrong when the functions are returning the result - but I've used these functions before and I didn't have this kind of problem...
Edit #1:
I realized both functions have a problem with zero. That is something I will have to work on. I looked through the comments, implemented a few changes from there, and made a few changes of my own. I won't include the whole functions, just go over the changes.
file: main.c
I dynamically allocated space for Data[], Sorted[] using malloc:
float *Data = (float *) malloc( (data_count + 3 ) * sizeof(float) );
In all of the functions (and their prototypes), I have arrays declared as float * and am passing the data_count variable as a size reference (I'm not quite comfortable with sizeof() in most instances).
file: std_dev.c:
In the for loop I changed
sum += pow( Data[data_count] - average, 2 );
to
sum += pow( Data[index] - average, 2 );
I'm going to run some tests, implement the rest of the calculations, and then see what I can do to fix the issue with zero values.
Thank you for your input!
--Jordan.
I am relatively sure I see a few errors:
When calculating standard deviation, you use the parameter data_count in the loop body rather than idx. That will never work.
You are using Data[data_count] as the parameter for your array in both the average and standard deviation functions. If you're using C you probably just want float *Data. I am pretty sure Data[data_count] is simply wrong here. Possibly float Data[] might be correct. EDIT: it has been pointed out in the comments that this syntax can actually be correct if the compiler supports it. Check to make sure your compiler supports this and, if so, no changes should be needed.
When you call the average and standard deviation functions, you are passing Data[data_count]. I am almost sure this must be wrong; Data[data_count] is the (data_count+1)'th element of Data, an array of size data_count; so it's not even defined and, if it were, the type is still wrong. I suggest simply passing Data here.
I typically work in C++ so these comments may be off the mark but if C is like C++ in these respects then these are definitely issues to look at.
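For reference, a minimal sketch of the two functions with the fixes above applied (float *Data plus an explicit count, and idx rather than data_count inside the loops); the function names mirror the question, the bodies are my assumption of the intent:

#include <math.h>

/* Mean of data_count floats. */
float Average(const float *Data, int data_count) {
    float sum = 0.0f;
    for (int idx = 0; idx < data_count; idx++)
        sum += Data[idx];                        /* idx, not data_count */
    return data_count > 0 ? sum / data_count : 0.0f;
}

/* Population standard deviation of data_count floats. */
float Std_Dev(const float *Data, int data_count) {
    if (data_count <= 0)
        return 0.0f;
    float average = Average(Data, data_count);
    float sum = 0.0f;
    for (int idx = 0; idx < data_count; idx++)
        sum += powf(Data[idx] - average, 2.0f);  /* Data[idx], not Data[data_count] */
    return sqrtf(sum / data_count);
}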
Why does the code below compile and execute but not print anything,
int i = 0;
printf(i = 0);
but this gives a runtime error,
int i = 0;
printf(i = 1);
The first argument to printf must be a char* value pointing to a format string. You gave it an int. That's the problem. The difference in behavior between printf(i = 0) and printf(i = 1) is largely irrelevant; both are equally wrong. (It's possible that the first passes a null pointer, and that printf detects and handles null pointers somehow, but that's a distraction from the real problem.)
If you wanted to print the value of i = 0, this is the correct way to do it:
printf("%d\n", i = 0);
You have a side effect in the argument (i = 0 is an assignment, not a comparison), which is legal but poor style.
If you have the required #include <stdio.h>, then your compiler must at least warn you about the type mismatch.
If you don't have #include <stdio.h>, then your compiler will almost certainly warn about calling printf without a declaration. (A C89/C90 compiler isn't strictly required to warn about this, but any decent compiler should, and a C99 or later compiler must.)
Your compiler probably gave you one or more warnings when you compiled your code. You failed to include those warnings in your question. You also failed to show us a complete self-contained program, so we can only guess whether you have the required #include <stdio.h> or not. And if your compiler didn't warn you about this error, you need to find out how to ask it for better diagnostics (we can't help with that without knowing which compiler you're using).
The expressions i = 0 and i = 1 in the printf calls evaluate to 0 and 1 (and i is assigned 0 and 1) respectively. So the printf statements above, after their argument expressions are evaluated, are equivalent to
printf(0); // To be clear, the `0` here is not an integer constant expression.
and
printf(1);
respectively.
0 and 1 will both be treated as addresses by printf, which will try to fetch a format string from those addresses. But 0 and 1 are almost certainly not valid, mapped addresses in your process, and accessing them results in undefined behavior.
printf requires a const char * for input, whereas you're giving it an int
printf expects a format string as its first argument. Since strings (char*) are pointers, you are providing an address as the first parameter.
Because the integer is (possibly with a warning) converted to an address,
printf(i = 0)
is (from the perspective of printf) the same as:
printf( 0 )
which is the same as
printf( NULL ) // no string given at all
If you provide 0 as the parameter, some printf implementations will detect the null pointer and do nothing at all, because 0 is the reserved address meaning nothing at all; this is not guaranteed, though, since passing a null pointer here is undefined behavior.
But printf( 1 ) is different: printf now searches for a string at address 1, which is not a valid address for your program and so your program throws a segmentation fault.
[update]
The main reason why this works is a combination of several facts you need to know about how char arrays, pointers, assignments and printf itself work:
Pointers can be converted to int and vice versa, so the int value 17, for example, can be turned into the memory address 0x00000011 (17). This is due to the fact that C is very close to the hardware and allows you to calculate with memory addresses. On many implementations an int (or, more specifically, one particular integer type) and a pointer have the same representation, just with a different syntax. This leads to a lot of problems when porting code from a 32-bit to a 64-bit architecture, but that is another topic.
Assignments are different from comparisons: i = 0 and i == 0 are totally different in meaning. i == 0 returns true when i contains the value 0, while i = 0 returns... 0. Why is that? You can chain variable assignments, for example:
a = b = 3;
At first, b = 3 is executed, returning 3 again, which is then assigned to a. This comes in handy for cases like testing whether there are more items to process:
while( (data = get_Item()) )
This now leads to the next point:
NULL is the same as 0 (which is also the same as false). Because integers and pointers can be converted into each other (or better: often share a physical representation, see my remarks above), there is a reserved value called NULL which means: pointing nowhere. This value is defined to be the virtual address 0x00000000 (32-bit). So by providing a 0 to a function that expects a pointer, you provide an empty pointer, if you like.
char* is a pointer. Same as char[] (there is a slight logical difference in behaviour and in allocation, but internally they are basically the same). printf expects a char*, but you provide an int. As described above, the int gets interpreted as an address. Nearly every (well written) function taking pointers as parameters will do a sanity check on them, and printf is no exception: if it detects a null pointer, it can handle that case and return silently. But when you provide something different, there is no way to know it is not really a valid address, so printf has to do its job and gets interrupted by the kernel with a segmentation fault when it tries to read the memory at 0x00000001.
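A tiny compilable illustration of the assignment-as-expression point (get_Item here is a stand-in I made up, returning 3, 2, 1, then 0 for "no more items"):

#include <stdio.h>

static int counter = 3;

static int get_Item(void) {
    return counter--;
}

int main(void) {
    int a, b, data;

    a = b = 3;                   /* b = 3 evaluates to 3, which is assigned to a */
    printf("%d %d\n", a, b);     /* prints: 3 3 */

    while ((data = get_Item()))  /* loop ends when the assigned value is 0 */
        printf("item: %d\n", data);

    return 0;
}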
This was the long explanation.
And by the way: This only works for 32-bit pointers (so compiled as 32-bit binary). The same would be true for long int and pointers on 64-bit machines. However, this is a matter of the compiler, how it converts the expression you provide (normally, an int value of 0 is implicitly cast to a long int value of 0 and then used as a pointer address when assigned, but vice versa won't work without an explicit cast).
Why does this code below compiles and executes but doesn't print anything in output?
printf(i = 0);
The question embodies a false premise. On any modern C compiler, this clearly erroneous code generates at least one diagnostic.
This is my code:
#include <stdio.h>
int main()
{
    int vet[10], i;
    for(i=30; i<=45; i++)
    {
        scanf("%d", &vet[i]);
    }
    for(i=30; i<=45; i++)
        printf(" %d ", vet[i]);
    for(i=30; i<=45; i++)
        printf(" %x", &vet[i]);
    return 0;
}
I declared just 10 positions of int type in memory, but I get more, so what happened?
Is it a memory overflow?
And is %x the correct conversion to print a memory address?
The input was:
1
2
3
4
5
6
7
8
9
10 /* It was supposed to stop right here!? */
11
12
13
14
15
16
and returned:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 /*I put space to indent*/
22ff6c 22ff70 22ff74 22ff78 22ff7c 22ff80 22ff84 22ff88 22ff8c 22ff90 22ff94 22ff98 22ff9c 22ffa0 22ffa4 22ffa8
The C language does not check bounds when you access arrays for reading or writing. It is up to the program author to ensure that the program accesses only valid array elements.
In this case, you wrote values to memory addresses outside your declared array. While you may sometimes get a segmentation violation (SIGSEGV) in this case, you may just get "lucky" -- really, unlucky -- and not encounter any problems at runtime.
C doesn't enforce array boundaries. Keeping within the limits is your responsibility in that language - it will let you do plainly wrong things, but it may crash at runtime.
Not only does the C language not check bounds on array accesses with respect to array size, which explains why you are successfully writing to the array 16 times, but C also does not have a mechanism for converting your range of 30 to 45 into the range of the first 10 (or 16?) elements of the array.
So, you are really attempting to write to the 31st through 46th element of the array vet, which has only 10 elements.
C is perfectly happy to let you read from and write to an array past the bounds you set (10, in this case).
Reading past the limit just gives you garbage; writing past it will do all kinds of crazy things and generally crash your program (or, if you are unlucky, overwrite your entire hard drive).
You were lucky with this program, but you should not keep doing that. In C, you are responsible for enforcing the limits of your arrays yourself.
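Since the language won't enforce the limits for you, a common pattern is to route accesses through a helper that checks the index; a minimal sketch (the helper name is my invention, the bound matches the question's vet[10]):

#include <stdio.h>

#define VET_SIZE 10

/* Returns 0 and stores the element if idx is in range, -1 otherwise. */
static int safe_read(const int *arr, size_t size, size_t idx, int *out) {
    if (idx >= size)
        return -1;   /* refuse the access instead of invoking undefined behavior */
    *out = arr[idx];
    return 0;
}

int main(void) {
    int vet[VET_SIZE] = {0};
    int value;

    if (safe_read(vet, VET_SIZE, 30, &value) != 0)
        printf("index 30 is out of bounds\n");   /* this branch is taken */
    return 0;
}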
int vet[10] declares a block of ten integers in memory. These memory locations are accessed via vet[0] through vet[9]. Any other access to memory through vet is undefined behavior. Absolutely anything could be within that memory, and you can easily corrupt the rest of your program execution. The compiler trusts you to know better than what you were doing.
As #NigelHarper correctly points out, %p is the official way of printing pointers. It prints in hexadecimal. Pointers could print in decimal, but the number itself is meaningless. Hexadecimal makes the printing more concise, and just as easy to see differences from one address to the next.
It is also possible that %x appears to work for printing a pointer, since it takes a value and prints it in hexadecimal form, but this is undefined behavior: %x expects an unsigned int, which may be narrower than a pointer. Use %p with the pointer cast to void *.
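For example, the question's third loop, corrected to stay in bounds and use %p, might look like this:

for (i = 0; i < 10; i++)
    printf(" %p", (void *)&vet[i]);   /* %p expects a void * */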
C does not do bounds checking on arrays and you are accessing an array out of bounds. The possible valid indexes in the array are [0,9], but you are accessing [30,45].
You should modify your code to only access valid indexes:
#define SIZE 10

int vet[SIZE];
//...
// not for( i = 30; i <= 45; i++ )
for( i = 0; i < SIZE; ++i ) { /* ... */ }
The C language has no support for checking out-of-bounds array accesses, and neither does C++ for built-in arrays (C++ containers such as std::vector do offer a bounds-checked at() accessor). An out-of-bounds access may generate a segmentation fault that terminates your process, or it may silently appear to work. Since C doesn't check, what you observed is expected behavior.
#include <stdio.h>
int main(void) {
    for (int i = 0; i < 5; i++) {
        int arr[5] = {0};
        // arr gets zeroed at runtime, and on every loop iteration!
        printf("%d %d %d %d %d\n", arr[0], arr[1], arr[2], arr[3], arr[4]);
        // overwrite arr with non-zero crap!
        arr[0] = 3;
        arr[1] = 5;
        arr[2] = 2;
        arr[3] = 4;
        arr[4] = 1;
    }
    return 0;
}
Apparently this works:
> gcc -Wall -Wextra -pedantic -std=c99 -o test test.c;./test
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
But:
What's going on under the hood?
Is this guaranteed to work for arrays of any size?
Have I found the most elegant way of zeroing an array on every iteration of a loop?
You are creating a new array at each iteration, so there is no re-initialization.
This has been answered before:
how does array[100] = {0} set the entire array to 0?
What's going on under the hood?
The compiler will generate code equivalent to memset (see Strange assembly from array 0-initialization for full details).
Is this guaranteed to work for arrays of any size?
Up to the limit of your available stack size, yes.
Have I found the most elegant way of zeroing an array on every iteration of a loop?
I suppose so. :)
Whenever you iterate over a loop, the variables you declare inside the loop body are local to that iteration: they are destroyed at the end of each iteration and created again when the next one starts.
So,
1. answered above;
2. yes, this should work for arrays of any size;
3. no...
The variable arr is created and initialized on every iteration of the loop because that's what you wrote. In your example, the code does something analogous to memset() to set the array to zero, but possibly more efficiently and inline. You could write a non-zero initializer too.
Yes, it works for 'any' size of array. If the arrays are bigger, the loop will be slower.
It is a very elegant way to achieve the effect. The compiler will do it as fast as possible; if you can write a function call that's quicker, the compiler writers have goofed.
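For instance, a non-zero initializer is re-applied on every iteration just the same; a small sketch:

#include <stdio.h>

int main(void) {
    for (int i = 0; i < 3; i++) {
        int arr[3] = {7, 8, 9};   /* re-initialized on every iteration */
        printf("%d %d %d\n", arr[0], arr[1], arr[2]);   /* always: 7 8 9 */
        arr[0] = 99;              /* overwritten here, but reset next pass */
    }
    return 0;
}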
Yes, the C Standard guarantees that your array elements are zeroed on each loop iteration. If you only want them zeroed on the first iteration, make the array static, i.e. with
static int arr[5] = {0};
You may leave out the = { 0 } in this case, as static data is required to be initialized as if all elements were assigned 0 in the absence of an initializer (or NULL for pointers, or 0.0 for floating point values, recursively for structs).
Note also, that if you use less initializers than the array dimension specifies, the remaining elements are assigned zeros. So
int arr[5] = {42, 0xdead};
is equivalent to
int arr[5] = {42, 0xdead, 0, 0, 0};
and using a single 0 is just a special case of this rule.
Setting aside the logical error in the code (re-initialization at each iteration), I would recommend using memset (or memcpy from a prepared template) whenever you want to assign to all the elements of an array. This is the tried and tested way, and it works like a charm in practically all compiler implementations.
Memset Example
MemCpy Example
So, using memset your code will become:
#include <string.h>

int a[5];
memset(a, 0, sizeof(int) * 5);   /* third argument: the number of bytes to set */
Please read the documentation to find out why it is fast; you will enjoy it...
EDIT
Sorry, and thanks to all the downvoters/commenters... I corrected the answer...