Maybe this question is noobish, but I am not well versed in the Unix environment or profiling.
I want to profile server code written in C that runs on Ubuntu as a service (I start it with the service command). Once started, it listens for requests and then performs some operations.
I am not able to understand exactly how to do the profiling with tools like gprof, valgrind and sprof.
I have tried all three but have not been able to generate any log.
I tried valgrind, but it just executes and doesn't wait for the actual request to come.
I used gprof and sprof, but no output files are generated.
I looked at several examples on SO and other sites, but they talk about sample code that is compiled into an executable and then run directly.
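Those examples seem to compile with -pg and rely on the program exiting normally; as far as I understand, gprof only writes gmon.out on a clean exit, and my service never exits cleanly when it is stopped. A minimal sketch of the kind of shutdown path they appear to assume (illustrative only, not my actual code), built with something like gcc -pg server.c -o server and run in the foreground rather than via the service command:

#include <signal.h>

static volatile sig_atomic_t stop;

/* gprof only writes gmon.out on a normal exit, so the service needs a
 * clean shutdown path instead of being killed outright. */
static void on_sigterm(int sig)
{
    (void)sig;
    stop = 1;               /* ask the main loop to finish */
}

int main(void)
{
    signal(SIGTERM, on_sigterm);

    while (!stop) {
        /* accept a request and serve it ... */
    }

    /* Returning from main is a normal exit, so gmon.out gets written. */
    return 0;
}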
I really need some help now.
Thanks
Related
I'm running a headless aarch64-based embedded Linux system. On this system there is a main program running, which is a compiled bit of C code. However, I would like to be able to monitor this program while it continues doing its thing.
During my education I worked with the XCP protocol to monitor systems and adjust them on the fly. But that is hands-on; I would like to automate it with Python or JavaScript so I can handle the data for other purposes.
So in short, there's a compiled bit of C code running on a Linux system, and I want to be able to see the value of its variables with something like JavaScript or Python.
I did some testing with GDB, but that seems to pause the execution of the program, which can't happen (a brief pause while it's starting might be acceptable, but not after that).
I've found this previous post:
How can you debug a process using gdb without pausing it?
However, when I type continue after applying all of the related settings I could find, it doesn't let me look up the value of a variable any more using a command like p <variable>.
I need to run a single curl command. The powers that be have decided this has to be done in C to avoid anyone being able to see the source code.
I don't know anything about C and have only a little programming experience, but I found that you can use system() to execute a shell command, like this:
#include <stdlib.h>  /* for system() */

int main(void)
{
    system("/usr/bin/curl http://192.168.1.1");
    return 0;
}
However, will there be a record/log anywhere in Linux (Ubuntu) that shows the full command my program executed?
Anyone with access to your account will easily find out whatever you are trying to hide (using strings or strace).
Someone without said access (hopefully the whole world except you and your sysadmin) can still use ps. It won't make any difference whether you use a C wrapper or run the command directly: in both cases ps will show the command and its arguments in all their glory.
Some commands, like sqlplus, manipulate their command line immediately after (although not exactly at) startup to hide e.g. passwords from prying eyes. curl does the same with usernames and passwords, but certainly not with URLs (which can be easily spied upon by network monitoring tools anyway).
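For illustration, this is roughly how such programs scrub their own argv. It is only a sketch (the names are made up), and it is inherently racy: ps can still catch the argument in the short window before it is overwritten.

#include <string.h>

int main(int argc, char *argv[])
{
    char secret[128] = {0};

    if (argc > 1) {
        /* Keep a private copy of the sensitive argument ... */
        strncpy(secret, argv[1], sizeof(secret) - 1);
        /* ... then scrub it, so ps and /proc/<pid>/cmdline show only X's. */
        memset(argv[1], 'X', strlen(argv[1]));
    }

    /* ... use 'secret' for the real work ... */
    (void)secret;
    return 0;
}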
On (post-3.2) Linux, your sysadmin can (re)mount /proc so that only root has access to your command lines, using the hidepid mount option.
This will go a long way towards protecting your command lines - a C wrapper will only be useful for its placebo effect on your boss.
I need to run a single curl command. The powers that be have decided this has to be done in C to avoid anyone being able to see the source code.
You probably need to make a single HTTP request. I recommend taking the time to read some HTTP specification, like RFC 2616 or newer. Read also about HTTP/2.
Consider using (from your C code) the libcurl library for that (without using system(3) to run any curl(1) command). You will need to spend several days reading about HTTP and libcurl.
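For illustration, a minimal sketch using libcurl's easy interface (built with something like gcc request.c -lcurl; the URL is just the example from the question, and by default the response body goes to stdout):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode res;

    /* Global init once per program, one easy handle per transfer. */
    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.1.1");
        res = curl_easy_perform(curl);   /* performs the GET request */
        if (res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}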
Of course, someone could strace(1) your software (and find all details about your HTTP request and response), and understand the involved syscalls(2). See also credentials(7) and read Advanced Linux Programming and the documentation of GCC.
Also learn more about C, e.g. by reading n1570 and this C reference.
Read also (and study documentation and source code) about the GDB debugger and ptrace(2).
Don't forget that the Linux kernel, the GCC compiler, the GDB debugger are all free software: you are allowed to download their source code, study it (it could take months), recompile that code (see LinuxFromScratch), and improve them.
There is no record/log of commands being executed by default. If you use bash, it has a history feature, which you can disable (with HISTSIZE=0). cron logs commands to syslog. ps will show processes that are currently running. If you run your C program under ltrace or strace, it will trivially tell you what it is doing, as will running strings on the binary (even a stripped one).
I have been developing a multi-threaded server (using Pthreads) for a network for about 2 months now, under Linux (Ubuntu 11.04 64-bit kernel 2.6.38).
The code is about 7000 lines of C at the moment. I have been using it in a network where multiple clients connect to it and get served. It has been running quite smoothly.
Suddenly I am facing a rather strange problem. Every now and then (about 1 out of 10 times) the server crashes due to a segmentation fault. I have looked all over the code but cannot seem to find the actual reason behind this. Can anyone guide me as to what may be going wrong here, or what I should try in order to find the actual bug?
Enable core file generation; when the application crashes, load the resulting core file into the debugger.
Run your application under valgrind with the memcheck tool.
Write unit tests. Lots of them, and increase coverage to 100%.
Stress test your application with valgrind's helgrind tool, which checks multithreaded code (a small example of the kind of race it reports follows below).
100% coverage isn't realistic, but 85%-95% can reasonably happen with diligence.
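As a concrete illustration of what helgrind reports, here is a deliberately broken sketch (not taken from the server in question): two threads update a shared counter without a mutex. Compile it with gcc -pthread race.c and run it under valgrind --tool=helgrind, and the race is flagged.

#include <pthread.h>
#include <stdio.h>

static long counter;                 /* shared and unprotected: a data race */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;                   /* helgrind reports this unsynchronised access */
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    printf("counter = %ld\n", counter);   /* usually ends up below 200000 */
    return 0;
}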
About why weird errors happen:
http://stromberg.dnsalias.org/~strombrg/checking-early.html
You said this started happening suddenly. Hopefully you've been using a source code control system like Mercurial or Git or SVN. If you have (or perhaps you have nightly backups?), you probably should look back at the changes made at about the time the problems started, trying to find the error, which is likely an undefined memory reference.
When an application hits a serious segmentation-fault issue that is hard to find or track down, I can use a debug build, generate a core dump file when the issue happens, and then debug the app with that core dump.
But how do I track down exceptional bugs in the application once it is released? There seems to be no core dump file for the release version. Logging is an option, but it is of little use when a hard-to-track bug happens.
So my question is: how do I track down those hard-to-track bugs in a release version? Are there any suggestions or technologies available?
The following references may help the discussion.
[1] Core dump in Linux
[2] generate a core dump in linux
[3] Solaris Core dump analysis
You can compile a release version with gcc -g -O2 ...
The lack of a core dump is related to your users' resource-limit settings (unless the application explicitly calls setrlimit or is setuid; in that case you should offer a way to avoid that call). You might teach your users how to get core dumps (with the appropriate bash ulimit builtin).
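Alternatively, the program itself can raise its core-dump limit at startup with setrlimit(2). A minimal sketch, assuming the hard limit has not been set to zero:

#include <sys/resource.h>

/* Raise the soft core-dump limit to the hard maximum so that a crash
 * actually leaves a core file (this has no effect if the hard limit is 0). */
static int enable_core_dumps(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) != 0)
        return -1;
    rl.rlim_cur = rl.rlim_max;
    return setrlimit(RLIMIT_CORE, &rl);
}

int main(void)
{
    if (enable_core_dumps() != 0)
        return 1;
    /* ... rest of the application ... */
    return 0;
}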
(and there is a somewhat obscure way to put the debugging information outside of the executable, e.g. with objcopy --only-keep-debug and --add-gnu-debuglink)
Distributions provide -dbg packages containing debugging symbols for programs. They are built along with the binary packages and give your users the ability to generate meaningful backtraces from core dumps. If you build your packages using the same utilities, you get these -dbg packages for your own software "nearly free".
I suggest using a crash-reporting system; in my experience, we use Google's Breakpad project for our Windows client program. Of course, you can also write your own.
Google Breakpad is an open-source, multi-platform crash-reporting system. It can produce a mini or full memory dump when an exception or crash happens, and you can configure it to upload the dump file and any additional files to a specific FTP or HTTP server, which is very helpful for finding bugs.
Here is the link:
Google Breakpad
Ask the "customer" for a description of what he or she did to make it crash, and try to replicate it yourself with your own version that has debug information.
The hard part is getting correct information from the customer. Often they will say they did nothing special or nothing different than before. If possible, go see the person having the problem, and ask them to do what they do to make the program crash, writing down every step.
I have written an application in C, which runs as a Windows service. Most users can run the app without any problems, but a significant minority experience crashes caused by an Access Violation, so I know I have a bug somewhere. I have tried setting up virtual machines to mirror the users' configurations as closely as possible, but cannot reproduce the issue.
My background is in Java; when a Java app crashes it will produce a stack trace showing exactly where the problem occurred, but native applications aren't so helpful. What techniques are normally used by C developers for tracking down this type of problem? I have no physical access to the users' machines that experience the crash, but I could send them additional tools to install to capture information. I also have Windows error reports showing the Exception Code/Offset etc., but these don't mean much to me. I have compiled my application using gcc; are there some compiler options I can use to generate more information in the event of a crash?
You could try asking the users to run ProcDump to capture a core dump when the program crashes. Unlike something like Visual Studio, it's a single, simple command-line utility, so there should be no problem getting the users to run it.
On most modern operating systems your app can install a crash handler that'll walk the stack(s) in the event of a crash. I have no experience doing this on Windows, but this article walks through how to do it.
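I have not verified this in a real deployment, but a bare-bones version of such a handler on Windows, using SetUnhandledExceptionFilter and MiniDumpWriteDump from dbghelp (link against dbghelp), might look like the sketch below; instead of walking the stack in-process, it writes a minidump you can open later in a debugger.

#include <windows.h>
#include <dbghelp.h>

/* Write crash.dmp in the current directory whenever an unhandled
 * exception (e.g. an access violation) occurs. */
static LONG WINAPI crash_handler(EXCEPTION_POINTERS *info)
{
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE) {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                          file, MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;   /* let the process terminate */
}

int main(void)
{
    SetUnhandledExceptionFilter(crash_handler);
    /* ... the service's real work ... */
    return 0;
}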