GDB reports "no symbol in current context" upon array initialization - c

I am attempting to initialize an array with a size of ceil(buflen/125.0) as follows:
long long maxjpg = ceil(buflen/125.0);
long long arr[maxjpg];
I do not receive a compiler error, but GDB reports "no symbol 'arr' in current context". The only fix I have found is by hardcoding a numerical value into the array size like so:
long long arr[5];
I have tried casting, using different variable types, using const and any combination of these approaches. I know that ceil returns a double, I have tried working with that too.
Initializing a value and printing it like so works:
arr[0] = 25;
printf("pos 0 is %d\n", arr[0]);
output: pos 0 is 25
Printing arr[0] through GDB after that modification results in "value has been optimized out".
Minimum viable code to reproduce:
#include <math.h>
int main(void){
long long size = ceil(123.45);
long long arr[size];
return 0;
}
GDB Fedora 7.4.50.20120120-52.fc17

VLAs do not currently work in gdb. There's a bug open about it and an ongoing project to fix it: https://sourceware.org/gdb/wiki/VariableLengthArray
There's an implementation in archer.git that works in some cases, but it isn't considered good enough to go in trunk.
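In the meantime, a common workaround is to avoid the VLA and inspect a heap-allocated buffer instead; malloc'd memory is reached through an ordinary pointer and shows up fine in gdb. A minimal sketch of that idea (the buflen value here is just a placeholder):
```
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t buflen = 1000;                        /* placeholder value */
    long long maxjpg = (long long)ceil(buflen / 125.0);

    /* heap allocation instead of a VLA; gdb can print arr and arr[i] */
    long long *arr = malloc(maxjpg * sizeof *arr);
    if (arr == NULL)
        return 1;

    arr[0] = 25;
    printf("pos 0 is %lld\n", arr[0]);

    free(arr);
    return 0;
}
```
Compiled with gcc -g -O0 file.c -lm, print arr[0] works in gdb because arr is a plain pointer rather than a VLA.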


Matrix not zero-filled on declaration

I was trying to debug my code in another function when I stumbled upon this "weird" behaviour.
#include <stdio.h>
#define MAX 20
int main(void) {
int matrix[MAX][MAX] = {{0}};
return 0;
}
If I set a breakpoint on the return 0; line and I look at the local variables with Code::Blocks the matrix is not entirely filled with zeros.
The first row is, but the rest of the array contains just random junk.
I know I can do a double for loop to initialize manually everything to zero, but wasn't the C standard supposed to fill this matrix to zero with the {{0}} initializer?
Maybe because it's been a long day and I'm tired, but I could've sworn I knew this.
I've tried to compile with different standards (with the Code::Blocks bundled gcc compiler): -std=c89, -std=c99, -std=c11, but it's the same.
Any ideas of what's wrong? Could you explain it to me?
EDIT:
I'm specifically asking about the {{0}} initializer.
I've always thought it would fill all columns and all rows to zero.
EDIT 2:
My issue is specifically with Code::Blocks and its bundled GCC. Other comments say the code works on different platforms. But why wouldn't it work for me? :/
Thanks.
I've figured it out.
Even without any optimization flag on the compiler, the debugger information was just wrong.
So I printed out the values with two for loops and it was initialized correctly, even if the debugger said otherwise (weird).
Thanks for the comments, though.
Your code should initialize it to zero. In fact, with GCC you can just write int matrix[MAX][MAX] = {}; (an empty initializer is a GCC extension in C prior to C23), and it will be initialized to 0. However, int matrix[MAX][MAX] = {{1}}; will only set matrix[0][0] to 1, and everything else to 0.
I suspect what you are observing with Code::Blocks is that the debugger (gdb?) is not quite showing you exactly where it is breaking in the code - either that or some other side-effect from the optimizer. To test that theory, add the following loop immediately after the initialization:
```
int i, j;
for (i = 0; i < MAX; i++)
for (j = 0; j < MAX; j++)
printf("matrix[%d][%d] = %d\n", i, j, matrix[i][j]);
```
and see if what it prints is consistent with the output of the debugger.
I am going to guess that what might be happening is that, since you are not using matrix, the optimizer might have decided not to initialize it. To verify, disassemble your main (disass main in gdb) and see if the matrix is actually being initialized.
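For reference, a rough command sequence for that check (the file name is just assumed; the exact output will vary by platform):
```
$ gcc -g -O0 main.c -o main     # -O0 rules out optimizer side-effects
$ gdb ./main
(gdb) break main
(gdb) run
(gdb) next                      # step past the initialization
(gdb) print matrix              # should show a 20x20 array of zeros
(gdb) disassemble main          # look for the code that zero-fills matrix
```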

Overflowing Buffer by specific integer

So if I had this code:
#include <stdio.h>
int main() {
char buff[20];
int x = 0;
gets(buff);
if (x==1337) {
printf("Congrats");
}
printf("%d\n", x);
return 0;
}
Knowing that this is written in C (not that it matters too much), how would one go about overflowing this char[] by exactly 1337?
And also, I don't really understand the output I'm getting. For example, if I run this code and input:
12345678901234567890a
The output is:
0
In fact, I have to overflow my char[] by an additional 8 characters past my array size before the data leaks into the value of x. I have no idea why this is, and would really like some help if somebody understands it. However, it doesn't stop there.
If my input is:
1234567890123456789012345678aa
My output is:
24929
This throws me for quite a twirl. I would've expected it to overflow by the char value of a+a, or maybe even a*a, but it is neither (the char value of a is 97).
So to sum it up, since I know it is probably confusing: I would like to know how to overflow a buffer (in C) into an int, leaving the int a very specific value (in this case 1337). In addition, if you could explain how these numbers come out, I would greatly appreciate it! By the way, it might help to mention that the code is being executed on a Linux shell.
Since this is probably homework, rather than give you a direct answer, what I'm going to do is show you the C program that does what you thought your program would do, and how it behaves when fed over-length input.
#include <stdio.h>
int main(void)
{
struct {
char buff[20];
volatile unsigned int x;
} S;
S.x = 0;
gets(S.buff);
if (S.x == 1337)
puts("Congrats");
else
printf("Wrong: %08x\n", S.x);
return 0;
}
The key difference is that I am forcing the compiler to lay out the stack frame a particular way. I am also printing the value of x in hexadecimal. This makes the output both more predictable, and easier to interpret.
$ printf 'xxxxxxxxxxxxxxxxxxxxABCD' | ./a.out
Wrong: 44434241
$ printf 'xxxxxxxxxxxxxxxxxxxxABC' | ./a.out
Wrong: 00434241
$ printf 'xxxxxxxxxxxxxxxxxxxxAB' | ./a.out
Wrong: 00004241
$ printf 'xxxxxxxxxxxxxxxxxxxxA' | ./a.out
Wrong: 00000041
Everything that confused you is caused by one or more of: the compiler not laying out the stack frame the way you expected, the ordering of bytes within an int not being what you expected, and the conversion from binary to decimal obscuring the data in RAM.
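As an aside, the 24929 from the question illustrates the last two points nicely: 24929 is 0x6161, i.e. two 'a' bytes (0x61) landing in the low-order byte positions of the int on a little-endian machine. A small sketch you can run to see the byte ordering for yourself:
```
#include <stdio.h>
#include <string.h>

int main(void)
{
    int x = 0;
    /* overwrite the two lowest-addressed bytes of x with 'a' (0x61),
       the same thing the overflow did to the original x */
    memcpy(&x, "aa", 2);

    /* little-endian machines print 24929 (0x00006161);
       big-endian machines would show 0x61610000 instead */
    printf("%d (0x%08x)\n", x, (unsigned)x);
    return 0;
}
```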
Exercise for you: Compile both my program and your original program in -S mode, which dumps out the generated assembly language. Read them side by side. Figure out what the differences mean. That should tell you what input (if any -- it might not be possible!) will get your program to print 'Congrats'.

Received MAC addresses of available WiFi networks are not correct

I am trying to find the MAC addresses of the available Wi-Fi networks in this area, but I receive wrong MAC addresses (at least for one access point here whose MAC address I know; it is not the same as what I receive).
My code is:
char MAC[64];
int len=sizeof(MAC)/sizeof(int);
int i;
for(i=1;i<len;i++){
MyScanResults = WFScanList(i);
//unsigned long long testMac =MyScanResults.bssid[i];
unsigned char* pTestMac = (unsigned char*)&MyScanResults.bssid[i];
sprintf(MAC, "%02x:%02x:%02x:%02x:%02x:%02x",
(unsigned)pTestMac[6],
(unsigned)pTestMac[5],
(unsigned)pTestMac[4],
(unsigned)pTestMac[3],
(unsigned)pTestMac[2],
(unsigned)pTestMac[1]
);
and my expected answer is:
bssid: 00:12:17:C6:F4:36
but each time I receive some addresses like this and some times this address change also:
MAC: 73:6D:65:36:F4:C6
I have also changed the order of the bytes, but nothing works.
Can anyone tell me where my problem is?
Thanks.
Your code doesn't make a lot of sense.
The looping and indexing from 1 are very suspect, and the use of i throughout is strange; in particular, taking a pointer into MyScanResults.bssid[i] and effectively slicing it can't be right.
I think your loop should be something like:
for(i=0; i < WFNetworkFound; i++)
{
const tWFNetwork myScanResults = WFScanList(i);
sprintf(MAC, "%02x:%02x:%02x:%02x:%02x:%02x",
myScanResults.bssid[0],
myScanResults.bssid[1],
myScanResults.bssid[2],
myScanResults.bssid[3],
myScanResults.bssid[4],
myScanResults.bssid[5]);
}
This assumes you've run the scan already so that the global variable WFNetworkFound has been updated. It also assumes that you're using openPicus, so that this reference code from which I picked up a thing or two is valid.

Maximum size array program in C?

With the following code, I am trying to make an array of numbers and then sort them. But if I set a high array size (MAX), the program stops at the last 'randomly' generated number and never gets to the sorting at all. Could anyone please give me a hand with this?
#include <stdio.h>
#define MAX 2000000
int a[MAX];
int rand_seed=10;
/* from K&R
- returns random number between 0 and 62000.*/
int rand();
int bubble_sort();
int main()
{
int i;
/* fill array */
for (i=0; i < MAX; i++)
{
a[i]=rand();
printf(">%d= %d\n", i, a[i]);
}
bubble_sort();
/* print sorted array */
printf("--------------------\n");
for (i=0; i < MAX; i++)
printf("%d\n",a[i]);
return 0;
}
int rand()
{
rand_seed = rand_seed * 1103515245 +12345;
return (unsigned int)(rand_seed / 65536) % 62000;
}
int bubble_sort(void)
{
int t, x, y;
/* bubble sort the array */
for (x=0; x < MAX-1; x++)
for (y=0; y < MAX-x-1; y++)
if (a[y] > a[y+1])
{
t=a[y];
a[y]=a[y+1];
a[y+1]=t;
}
return 0;
}
The problem is that you are storing the array in the global section. C doesn't give any guarantee about the maximum size of the global section it can support; this is a function of the OS, architecture, and compiler.
So instead of creating a global array, create a global C pointer and allocate a large chunk using malloc. The memory is then taken from the heap, which is much bigger and can grow at runtime.
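A minimal sketch of that change (keeping the rest of the program as it is):
```
#include <stdio.h>
#include <stdlib.h>

#define MAX 2000000

int *a;   /* global pointer instead of a global array */

int main(void)
{
    a = malloc(MAX * sizeof *a);   /* the memory now comes from the heap */
    if (a == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    /* ... fill, sort and print exactly as before ... */
    free(a);
    return 0;
}
```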
Your array will land in the BSS section for static variables. It will not be part of the image, but the program loader will allocate the required space and fill it with zeros before your program starts 'real' execution. You can even control this process if you are using an embedded compiler and fill your static data with anything you like. Such an array may occupy 2GB of your RAM and yet your exe file may be a few kilobytes. I've just managed to use an over-2GB array this way and my exe was 34KB. A compiler may warn you when you approach maybe 2^31-1 elements (if your int is 32-bit), but static arrays with 2 million elements are not a problem nowadays (unless it is an embedded system, but I bet it is not).
The problem might be that your bubble sort has 2 nested loops (as all bubble sorts do), so trying to sort this array of 2 million elements causes the program to loop about 2*10^12 times (an arithmetic series):
inner loop:
1: 1999999 times
2: 1999998 times
...
2000000: 1 time
So you must compare elements
2000000 * (1999999+1) / 2 = (4 * 10^12) / 2 = 2*10^12 times
(correct me if I am wrong above)
Your program simply stays too long in the sort routine and you are not even aware of it. What you see is just the last rand number printed and the program not responding. Even on my really fast PC it took around one minute to sort a 200K array this way.
It is not related to your OS, compiler, heap, etc. Your program is just stuck, as your loop executes 2*10^12 times when you have 2 million elements.
To verify this, print "sort started" before the sorting and "sort finished" after it. I bet the last thing you'll see is "sort started". In addition, you may print the current x value before your inner loop in bubble_sort; you'll see that it is working.
Dynamic Array
int *Array;
Array = malloc(sizeof(int) * Size);
The original C standard (ANSI 1989/ISO 1990) required that a compiler successfully translate at least one program containing at least one example of a set of environmental limits. One of those limits was being able to create an object of at least 32,767 bytes.
This minimum limit was raised in the 1999 update to the C standard to be at least 65,535 bytes.
No C implementation is required to provide for objects greater than that size, which means that they don't need to allow for an array of ints greater than
(int)(65535 / sizeof(int)).
In very practical terms, on modern computers it is not possible to say in advance how large an array can be created. It can depend on things like the amount of physical memory installed in the computer, the amount of virtual memory provided by the OS, the number of other tasks, drivers, and programs already running, and how much memory they are using. So your program may be able to use more or less memory today than it could yesterday or than it will be able to tomorrow.
Many platforms place their strictest limits on automatic objects, that is, those defined inside of a function without the use of the static keyword. On some platforms you can create larger arrays if they are static or dynamically allocated.
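A short sketch of the three placements that sentence talks about (the sizes and exact limits are of course platform-dependent):
```
#include <stdlib.h>

static int big_static[2000000];          /* static storage: usually fine */

int main(void)
{
    /* automatic storage: several MB on the stack often overflows it */
    /* int big_auto[2000000]; */

    /* dynamic allocation: limited mainly by available memory */
    int *big_heap = malloc(2000000 * sizeof *big_heap);
    if (big_heap == NULL)
        return 1;

    free(big_heap);
    return 0;
}
```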

Constants in C pros/cons [duplicate]

At the end of the article here: http://www.learncpp.com/cpp-tutorial/45-enumerated-types/, it mentions the following:
Finally, as with constant variables, enumerated types show up in the debugger, making them more useful than #defined values in this regard.
How is the bold sentence above achieved?
Thanks.
Consider this code,
#define WIDTH 300
enum econst
{
eWidth=300
};
const int Width=300;
struct sample{};
int main()
{
sample s;
int x = eWidth * s; //error 1
int y = WIDTH * s; //error 2
int z = Width * s; //error 3
return 0;
}
Obviously each multiplication results in compilation-error, but see how the GCC generates the messages for each multiplication error:
prog.cpp:19: error: no match for
‘operator*’ in ‘eWidth * s’
prog.cpp:20: error: no match for
‘operator*’ in ‘300 * s’
prog.cpp:21: error: no match for
‘operator*’ in ‘Width * s’
In the error message, you don't see the macro WIDTH which you've #defined, right? That is because by the time GCC makes any attempt to compile the line corresponding to the second error, it doesn't see WIDTH; all it sees is 300, because before GCC compiles the line the preprocessor has already replaced WIDTH with 300. On the other hand, nothing of that sort happens with the enum eWidth and the const Width.
See the error yourself here : http://www.ideone.com/naZ3P
Also, read Item 2 : Prefer consts, enums, and inlines to #defines from Effective C++ by Scott Meyers.
An enum is a compile-time constant with debug info and no storage allocation.
A const is allocated storage, unless the compiler optimises it away with constant propagation.
#define has no storage allocation.
#define values are replaced by the pre-processor with the value they are declared as, so in the debugger it only sees the value, not the #defined name. E.g. if you have #define NUMBER_OF_CATS 10, in the debugger you'll see only 10 (since the pre-processor has replaced every instance of NUMBER_OF_CATS in your code with 10).
An enumerated type is a type in itself and the values are constant instances of this type, and so the pre-processor leaves it alone and you'll see the symbolic description of the value in the debugger.
The compiler stores enum information in the binary when the program is compiled with certain options.
When a variable is of a enum type, a debugger can show the enum name. This is best shown with an example:
enum E {
ONE_E = 1,
};
int main(void)
{
enum E e = 1;
return 0;
}
If you compile that with gcc -g you can try out the following in gdb:
Reading symbols from test...done.
(gdb) b main
Breakpoint 1 at 0x804839a: file test.c, line 8.
(gdb) run
Starting program: test
Breakpoint 1, main () at test.c:7
7 enum E e = 1;
(gdb) next
9 return 0;
(gdb) print e
$1 = ONE_E
(gdb)
If you used a define, you would not have a proper type to give e, and would have to use an integer. In that case, gdb would print 1 instead of ONE_E.
The -g flag asks gcc to add debugging information to the binary. You can even see that it is there by issuing:
xxd test | grep ONE_E
I don't think that this will work in all architectures, though.
At least for Visual Studio 2008 which I currently have at hand, this sentence is correct. If you have
#define X 3
enum MyEnum
{
MyX = 3
};
int main(int argc, char* argv[])
{
int i = X;
int j = (int)MyX;
return 0;
}
and you set a breakpoint in main, you can hover your mouse over "MyX" and see that it evaluates to 3. You do not see anything useful if you hover over X.
But this is not a language property but rather IDE behavior. Future versions might do it differently, as might other IDEs. So check your IDE to see if this sentence applies in your case.
I am answering late, but I feel I can add something.
enum vs. const vs. #define
enum -
It does not require assigning values (if you just want sequential values 0, 1, 2, ...), whereas with #defines you need to manage the values manually, which can lead to human error.
It works just like a variable: during online debugging, the value of an enum can be watched in the watch window.
You can have a variable of the enum type to which you can assign an enumerator:
typedef enum numbers
{
DEFAULT,
CASE_TRUE,
CASE_OTHER
} numbers;
int main(void)
{
numbers number = CASE_TRUE;
}
const -
It is a constant stored in a read-only area of memory, but it can be accessed through its address, which is not possible in the case of #define.
You get type checking if you use const rather than #define.
#defines are preprocessing directives, while const is handled at compile time.
for example
const char *name = "vikas";
You can take the name's address and index into it, e.g. name[3] to read 'a', etc.
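For instance, a tiny sketch of that point:
```
#include <stdio.h>

const char *name = "vikas";

int main(void)
{
    /* the const object has an address, so it can be indexed at run time */
    printf("%c\n", name[3]);    /* prints 'a' */
    return 0;
}
```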
#defines -
are dumb preprocessor directives that just do textual replacement
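A classic illustration of that textual replacement (the standard macro pitfall, not something from the article):
```
#include <stdio.h>

#define SQUARE(x) x * x        /* pure textual substitution, no parentheses */

int main(void)
{
    /* expands to 1 + 2 * 1 + 2, which is 5, not 9 */
    printf("%d\n", SQUARE(1 + 2));
    return 0;
}
```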
Check the following article for a nice summary:
http://www.queryhome.com/26340/define-vs-enum-vs-constant
