I'm using Xcode on OS X to develop command-line C applications. I would also like to use Instruments to profile them and find memory leaks.
However, I couldn't find a way to display the console when launching the application from within Instruments. I'm also unable to attach to a running command-line process (it exits with an error).
Here's some example code:
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <setjmp.h>

static sigjmp_buf jmpbuf;

/* SIGINT handler: asks (in Portuguese) whether the user wants to quit */
void handler(int sig) {
    char c[BUFSIZ];
    printf("Got signal %d\n", sig);
    printf("Deseja sair? (s/n) ");       /* "Do you want to quit? (y/n)" */
    fgets(c, sizeof(c), stdin);
    if (c[0] == 's') {                   /* 's' for "sim" (yes) */
        exit(0);
    } else {
        siglongjmp(jmpbuf, 1);           /* jump back into the main loop */
    }
}

int main(void) {
    char buf[BUFSIZ];
    signal(SIGINT, handler);
    sigsetjmp(jmpbuf, 1);
    while (1) {
        printf(">>>");
        fgets(buf, sizeof(buf), stdin);
        printf("Introduziu: %s\n", buf); /* "You entered: %s" */
    }
    return 0;
}
Here's the error I got after launching Instruments and trying to attach to the running process in Xcode:
[Switching to process 1475]
[Switching to process 1475]
Error while running hook_stop:
sharedlibrary apply-load-rules all
Error while running hook_stop:
Invalid type combination in ordering comparison.
Error while running hook_stop:
Invalid type combination in ordering comparison.
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Unable to disassemble __CFInitialize.
Any thoughts?
It's easy. See the screenshot.
It's a little late to contribute to this old thread; however, I have found that the best way of profiling a command-line utility is to use iprofiler (see its man page). This allows data to be collected from the command line simply by adding this to the start of the command line:
iprofiler -leaks -d $HOME/tmp
(I have a private temporary directory at $HOME/tmp, so you might need to use /tmp or leave the -d command line option off altogether).
My test scripts automatically add that to the command line if $FINDLEAKS is defined (and will prepend valgrind if running under Linux).
This then generates a .dtps file (actually a directory) which can be loaded and analysed using Instruments.
If you are compiling using clang then simply add both -O3 and -g (clang doesn't support the -pg command line option).
You can change the output in the Options dropdown when choosing your target. The output will appear in the system Console (Applications/Utilities/Console).
Related
Here is a very simple program that I am trying to debug with cgdb. The problem is that once I get to the scanf line, it prompts for input, but after I press Enter (having typed 2, as in the example below) it seems to enter an infinite loop. It works fine in gdb, though.
#include <cstdio>
using namespace std;

int main()
{
    int n;
    scanf("%d", &n);
    printf("%d\n", n);
    return 0;
}
Here is the execution trace in terminal:
Type "apropos word" to search for commands related to "word"...
Reading symbols from test...done.
(gdb) start
Temporary breakpoint 1 at 0x400585: file test.cpp, line 7.
Starting program: /home/Alex/Desktop/test
Temporary breakpoint 1, main () at test.cpp:7
(gdb) next
2 (this is my input)
Infinite loop starts here.
According to the cgdb info page, you need to either:
start the program in one terminal and attach to it with cgdb from another terminal,
or send input to the program through the tty window.
To open the tty window, press 'T' while in command mode (Esc).
Extracted from the info page:
Sending I/O to the program being debugged
This technique is similar to getting in and out of "GDB mode". The tty window is not visible by default. This is because it is only needed if the user wishes to send data to the program being debugged. To display the tty window, hit `T' while in command mode.
I am invoking make from my C program, which in turn executes another program. I am redirecting both standard output and standard error to a file. However, when the program run by make terminates due to a segmentation fault, a core dump is generated and a message about it is printed to the console (standard out) of the main program that is invoking make.
How can I get around this and not have the core dump show on the console?
The following is my code to invoke make:
int pid = fork();
if (pid == 0) {
    dup2(make_logs, 1);    /* send the child's stdout to the log file */
    dup2(make_logs, 2);    /* send the child's stderr to the log file */
    close(make_logs);
    execvp(args[0], args);
}
where make_logs is the file descriptor returned by open().
Thanks
I would try to fix the core dump rather than suppress the message. That said, the message about the segmentation fault is generated by the shell (which inspects the child's exit status and recognizes a core-dump situation), so you can suppress it by doing the fork() and wait() yourself rather than having the shell do the work.
To suppress the core dump itself, just use limit coredumpsize 0 (csh syntax; the bash equivalent is ulimit -c 0).
Sample of suppression (sloppy code; you should really be checking for errors):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid;

    if ((pid = fork()) > 0)
        wait(NULL);                                /* parent: reap the child, print nothing */
    else if (pid == 0) {
        execl("program-that-cdumps", "program-that-cdumps", (char *)NULL);
        perror("failed in execl");                 /* only reached if execl fails */
    } else
        perror("failed in fork");
    return 0;
}
Read the core(5) and signal(7) man pages.
Compile all your programs with gcc -Wall -g. Then use
file core
to find out which binary dumped the core. It will probably say something like core dump from foo, telling you that program foo dumped the core. Then start a post-mortem debugger on it:
gdb foo core
and use the usual gdb commands (notably bt for a backtrace, p to print values, etc.).
The dumped core message is printed by some shell (or perhaps by make when it is acting like a shell); the core file itself is not output to stdout (it is a big binary file).
If you wish to avoid the core dump (which IMHO is a bad idea: a core dump is a useful symptom of something going wrong), you could call the setrlimit(2) syscall with RLIMIT_CORE and a limit of 0 after your fork and before the execvp. I believe you should not do that (or should at least make it configurable so that setrlimit is not always called: sometimes you really need the core dump to debug the problem).
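If you do go that route, here is a minimal, hedged sketch of the idea (this is not the poster's code; the args vector is a hypothetical stand-in). Resource limits are inherited across execvp(), so lowering RLIMIT_CORE in the child also applies to the program it runs:

#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *args[] = { "make", NULL };          /* hypothetical argument vector */
    pid_t pid = fork();

    if (pid == 0) {
        struct rlimit no_core = { 0, 0 };     /* soft and hard core limit of 0 */
        setrlimit(RLIMIT_CORE, &no_core);     /* child (and what it execs) won't dump core */
        execvp(args[0], args);
        _exit(127);                           /* only reached if execvp fails */
    }
    if (pid > 0)
        wait(NULL);                           /* reap the child */
    else
        perror("fork");
    return 0;
}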
You should fix the problem which gives the core dump, not try to avoid the dumped core message!
If you run make on a user-provided Makefile, so that the core dump comes from a user program, you really want to keep the user informed that a core dump did happen, so you should keep the core dumped message.
I'm using gcov to collect code-coverage data for a C project I'm working on. I understand that gcov dumps the coverage data once the program exits after completion. How do I collect gcov data for long-running processes? (Say my program is the kernel of an operating system that runs on a server that never shuts down, and I need to collect coverage data for it.) Is there any way to make gcov dump coverage data periodically (say, every hour) or upon a certain event? How can I trigger a coverage dump rather than waiting for it to happen when the program terminates?
Call __gcov_flush() periodically.
This can be done by installing a signal handler:
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

void __gcov_flush(void);   /* provided by the gcov runtime when built with coverage enabled */

static void catch_function(int sig) {
    __gcov_flush();        /* dump the coverage counters collected so far */
}

int main(void) {
    if (signal(SIGINT, catch_function) == SIG_ERR) {
        fputs("An error occurred while setting a signal handler.\n", stderr);
        return EXIT_FAILURE;
    }
    while (1);             /* stand-in for the long-running work */
}
Compile as usual: gcc sig.c -ftest-coverage -fprofile-arcs
Then trigger a (periodic) dump with kill -2 process_id.
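If you want a purely time-driven dump instead of sending SIGINT by hand, a similar sketch can re-arm alarm() from a SIGALRM handler. (Hedged: on recent GCC releases __gcov_flush was replaced by __gcov_dump()/__gcov_reset(), so check which symbol your toolchain provides.)

#include <signal.h>
#include <unistd.h>

void __gcov_flush(void);            /* gcov runtime hook; newer GCC uses __gcov_dump()/__gcov_reset() */

#define FLUSH_INTERVAL 3600         /* seconds between dumps, e.g. one hour */

static void flush_coverage(int sig)
{
    (void)sig;
    __gcov_flush();                 /* write the accumulated coverage counters */
    alarm(FLUSH_INTERVAL);          /* re-arm the timer for the next dump */
}

int main(void)
{
    signal(SIGALRM, flush_coverage);
    alarm(FLUSH_INTERVAL);          /* first dump after FLUSH_INTERVAL seconds */

    for (;;)                        /* stand-in for the real long-running work */
        pause();                    /* sleep until a signal arrives */

    return 0;
}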
When I run programs that segfault, I get the error message Segmentation fault: 11. For some reason I'm not getting the (core dumped) part of the message. I tried running the shell command ulimit -c unlimited, but I still get the same error and it doesn't say core dumped. I'm new to GDB, so I tried it with a simple program:
/* coredump.c */
#include <stdio.h>

int main(void) {
    int *point = NULL;
    *point = 0;        /* deliberate write through a NULL pointer to force a segfault */
    return 0;
}
But when I compile using:
gcc coredump.c -g -o coredump
and run it, it still says Segmentation fault: 11.
Is it still creating a core dump somewhere I don't know about? I want to be able to use gdb coredump core.
Look at this link:
How to generate a core dump in Linux when a process gets a segmentation fault?
Options include:
ulimit -c unlimited (default = 0: no core files generated)
the directory for the dump must be writable. By default this is the current directory of the process, but that may be changed by setting /proc/sys/kernel/core_pattern.
in some conditions, the kernel value in /proc/sys/fs/suid_dumpable may prevent the core from being generated.
"man core" for other options
find / -name core -print 2> /dev/null to search your filesystem for core files
I presume you're running Linux, and I presume you're executing the binary in a directory where you have write permission.
So my top two guesses would be: 1) ulimit -c unlimited isn't taking effect or is being overridden, or 2) the core files are being generated but are going somewhere else.
The above suggestions should help. Please post back what you find!
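To check guess 1) from inside the program itself, you can print the core-file limit the process actually inherited. A minimal sketch (my own illustration, not from the original answer) using the standard getrlimit() call:

/* Print the core-file size limit this process inherited, to verify
 * whether "ulimit -c unlimited" really took effect for it. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit core_limit;

    if (getrlimit(RLIMIT_CORE, &core_limit) != 0) {
        perror("getrlimit");
        return 1;
    }

    if (core_limit.rlim_cur == RLIM_INFINITY)
        printf("soft core limit: unlimited\n");
    else
        printf("soft core limit: %llu bytes\n",
               (unsigned long long)core_limit.rlim_cur);
    return 0;
}

If this prints 0 bytes, the ulimit setting is not reaching the process; if it prints unlimited, the core file is most likely being written somewhere unexpected (see the core_pattern note above).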
If you're running the program that crashes from the shell, then you should follow the guidelines in Apple's Tech Note TN2124, which I found out about in the answer to SO2207233.
There are a few key points:
You need to set ulimit -c unlimited in bash (same effect, different command in tcsh).
You need to set the permissions on the /cores directory so that you can create files in it. The default permissions are 1775; you need 1777. The 1 indicates the sticky bit is set.
The core dumps are then created in /cores suffixed with a PID (/cores/core.5312, for example).
If you want programs launched graphically to dump core when they crash, then you need to create /etc/launchd.conf if it does not already exist, and add a line limit core unlimited to the file. Again, see the information in the Tech Note for more details.
Watch out: core dumps are huge! Consider this not very complicated or big program:
#include <stdio.h>

int main(void)
{
    int *i = 0;
    int j = 0;
    printf("i = %d, j = %d, i / j = %d\n", *i, j, *i / j);
    return 0;
}
The core dump from this is nearly 360 MB.
Using gcc, if you add the flags:
gcc -g -dH
you should be able to generate a core dump.
The -g flag produces debugging information for use with gdb, and the -dH flag produces a core dump when there is an error.
Sometimes core files are not stored in the current directory and may follow a different naming rule.
sysctl -a | grep kern.core
may give hints about where your core files are stored.
How can I dump the core when my program receives the SIGSEGV signal? (The server that runs my program has very limited permissions, and core dumps are therefore disabled by default.)
I have written the following using gcore, but I would like to use C functions instead. Can I somehow catch the core and write it to a folder somewhere?
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void segfaulthandler(int parameter)
{
    char gcore[50];
    /* ask an external gcore process to write a core file for this PID */
    sprintf(gcore, "gcore -s -c \"core\" %u", (unsigned)getpid());
    system(gcore);
    exit(1);
}

int main(void)
{
    signal(SIGSEGV, segfaulthandler);
}
Unless there's a hard limit preventing you, you can use setrlimit(RLIMIT_CORE, ...) to increase the soft limit and enable core dumps; this corresponds to running ulimit -c in the shell.
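As a concrete illustration, here is a minimal sketch (my own, not from the answer above) that raises the soft core limit up to the hard limit at program start; note that an unprivileged process cannot raise a hard limit of 0:

/* Raise the soft core-file limit to the hard limit at startup,
 * so a later SIGSEGV can produce a core file without gcore tricks. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit core_limit;

    if (getrlimit(RLIMIT_CORE, &core_limit) == 0) {
        core_limit.rlim_cur = core_limit.rlim_max;   /* soft limit up to the hard limit */
        if (setrlimit(RLIMIT_CORE, &core_limit) != 0)
            perror("setrlimit");
    }

    /* ... rest of the program; a crash from here on can dump core ... */
    return 0;
}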
On Linux, you can typically do:
$ ulimit -c unlimited
The resulting core file will be written in the current working directory of the process when the signal is received.