AIX core dump not created when running under SRC

If I run the C program directly, it creates a core dump file, but if I run the same application under SRC, no core dump is created.
I tried setting ulimit, but the problem remains. What could be causing the service to fail to create a core dump?

Perhaps the core file is being written to another directory? Check the output of errpt -a to see whether a core dump is being logged.
The process also might not have permission to write to its current working directory when it tries to dump core.
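One thing worth checking is what limits and working directory the process actually gets when SRC starts it: a ulimit set in your own shell does not propagate to processes spawned by srcmstr. As a rough diagnostic (script name and paths below are placeholders), you can temporarily point the subsystem's command at a wrapper script:
#!/bin/sh
# diagnose_core.sh -- hypothetical wrapper; point the SRC subsystem's
# command at this script instead of the real binary, temporarily.
{
echo "cwd:        $(pwd)"          # where a core file would be written
echo "core limit: $(ulimit -c)"    # 0 here means no core can be dumped
echo "user:       $(id)"           # check write permission on the cwd
} >> /tmp/src_core_diag.log 2>&1
exec /path/to/real/binary "$@"     # replace with the actual program
If the logged core limit is 0, you may need to raise the core attribute for the owning user in /etc/security/limits and restart the subsystem (and possibly srcmstr) for the new limit to take effect.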

Related

Deploying Native C/C++ Binary as a standalone Microservice in PCF-DEV

I have a use case to deploy a compiled Native C executable as a Microservice on PCF:
The compiled executable is called like so:
"mycbinary inputfile outputfile"
and terminates after the operation. The Microservice is thus not an LRP.
It is possibly a Task in PCF parlance, but it does not rely on the existence of other Microservices.
It must be a standalone Microservice, but not a long-running one.
How can I achieve this use case with PCF, i.e. what possibilities do I have?
The Microservice terminates when the binary is done with its work until it is needed again.
To test the feasibility of what I could do, I tried pushing some compiled C code to PCF-DEV.
I am using cf push, since my understanding is that this is how a standalone app is deployed on PCF:
cf push HelloServiceAgain -c './helloworld' -b https://github.com/cloudfoundry/binary-buildpack.git -u process --no-route
The push crashed with the following message:
Waiting for app to start...
Start unsuccessful
TIP: use 'cf.exe logs HelloService --recent' for more information
In the log file there was this entry:
OUT Process has crashed with type: "web"
Then I pushed another command, which expects parameters. This one started without a problem, but the same message appeared in the log file:
cf push HelloServiceGCC -c 'gcc -o ./hellogcc ./hello1.c' -b https://github.com/cloudfoundry/binary-buildpack.git -u process --no-route
I have the following additional questions please:
1) Is the message 'Process has crashed with type: "web"' an error? And why is the command invoked multiple times?
2) The second push, which succeeded, is supposed to create a compiled hellogcc executable, which I expect to see in the same root directory. Where is the output file created, and how can I access it from the local file system?
Sorry for asking so many questions, but I am a newbie in the PCF business.
The Microservice is thus not an LRP. It is possibly a Task in PCF parlance, but it does not rely on the existence of other Microservices. It must be a standalone Microservice but not a long-running one.
It's definitely a task. It's fine that it does not rely on other services. A task is simply a process that runs for a finite amount of time and exits 0 for success or non-zero for an error.
cf push HelloServiceAgain -c './helloworld' -b https://github.com/cloudfoundry/binary-buildpack.git -u process --no-route
I would recommend using this slight variation: cf push HelloServiceAgain -c 'sleep 5000' -b binary_buildpack -u process --no-route.
This will:
assume that the compiled binary is in the current directory;
use the system-provided binary buildpack, which should be fine;
set the health check to be based on the process, and set no route;
run sleep, which is merely there to pass the health check.
The purpose of this is to get your application to upload and stage. We set the command to sleep because we just need the app to stage and start (this is a workaround: at the moment, you have to run the app at least once to trigger staging). Once the app starts, just run cf stop to stop it. Since all you have is the task, you don't need the app to keep running. If you update your app, though, you will need to repeat this process, as that will restage your changes.
At this point, you can cf run-task and execute your program. Ex: cf run-task app-name './helloworld'.
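Putting it all together, the whole flow might look like this (the app name, binary, and arguments are just examples):
# Stage the app once; sleep only exists to satisfy the health check
cf push HelloServiceAgain -c 'sleep 5000' -b binary_buildpack -u process --no-route
# Stop it once staged; only the droplet is needed from here on
cf stop HelloServiceAgain
# Run the binary as a finite task whenever it is needed
cf run-task HelloServiceAgain './helloworld inputfile outputfile'
# Check task status and output
cf tasks HelloServiceAgain
cf logs HelloServiceAgain --recent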
Hope that helps!

Backtrace files and core files in Cavium-Octeon

I am exploring the information that is saved when a core hangs, as in the following example:
user.emerg gs_app_main[1075]:
10#173805766276886: * Begining crash dump for core 10
10#173805773984802: Num cores left running 30 on coremask 0xfffffbfe *
10#173805784192440: Core 10: Unhandled Exception. Cause register decodes to: address exc, load/fetch
I've searched the file system for backtrace* and core* files. I've read that gcc can be used to generate a backtrace, but the application hardware's Linux distribution does not include gcc. I also find files named core*, but I'm not sure which of them are significant.
Thank you in advance for any tips.
Regards,
Dale
OCTEON Simple-Exec applications running bare-metal don't have the ability to generate a core file or a backtrace saved as a file.
Simple-Exec applications running in Linux user-space can generate a core, although whether it is captured and saved depends on a number of factors.
If core generation and capture succeed, you will find the core file in the launch directory. You will have to use OCTEON gdb to examine the core file.
In both cases, a backtrace may be generated and printed on the serial console, or reported to the system log.
If you have multiple core* files, then obviously the latest ones, or the ones corresponding to the crash time, are the relevant ones.
Remember, you will have to use the OCTEON-native gdb on the target, or an OCTEON cross-built gdb on an x86 host, to examine the core files.
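As a sketch of that last step, assuming the OCTEON SDK's cross toolchain on an x86 host (the toolchain prefix and file names below are illustrative; use the ones shipped with your SDK):
# Open the crashed application and its core file with the cross gdb
mips64-octeon-linux-gnu-gdb ./gs_app_main core.1075
# Inside gdb, inspect the crash site
(gdb) bt
(gdb) info threads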

How do I configure ABRT to store core files for my unpackaged programs in the current working directory?

I'm using Fedora 25, which uses abrt to manage my core dumps. Following the documentation, I've set "ProcessUnpacked" to "yes", and I can see my core files when a program I'm maintaining dumps core. Unfortunately those cores are stored in /var/spool/abrt, which is unsatisfactory to me for a variety of reasons.
I would like to configure abrt to store core files (or the entire core-dump info directory) in the current working directory when it detects that it is processing an unpackaged program. Can someone tell me how to do this? If there's anything special I need to know to keep SELinux happy, I'd appreciate that info as well.
I'd actually recommend instead configuring your system to use coredumpctl. See https://fedoraproject.org/wiki/Changes/coredumpctl for the plan to make this the default in Fedora 26. Making this the default on your system now is easy:
sudo systemctl disable --now abrt-ccpp.service
sudo systemctl enable --now abrt-journal-core.service
You may find the coredumpctl management tool to be convenient. If you don't want this at all, disable both of the services above and replace the file /usr/lib/sysctl.d/50-coredump.conf with a symlink to /dev/null. (And/or otherwise set /proc/sys/kernel/core_pattern to a filename, like the default core.)
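Once systemd-coredump is handling crashes, coredumpctl can pull a core back into your current directory on demand, which may address the original requirement; for example (the executable path is a placeholder):
# List crashes captured by systemd-coredump
coredumpctl list
# Write the most recent core for a given executable into the cwd
coredumpctl dump /usr/bin/myprog --output=./core.myprog
# Or open it directly in gdb
coredumpctl gdb /usr/bin/myprog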

How to capture List of Processes that have received SIGSEGV

Part of my application (preferably a daemon) needs to log the list of process names that have been core dumped. It would be great if someone could point out which mechanism I can use.
If the processes are truly dumping core, you could use the following trick:
Set /proc/sys/kernel/core_pattern to |/absolute/path/to/some/program %p %e
This will cause the system to execute your program (passing the faulting process's PID and executable name) and pipe the core dump into its standard input. You may then log and store the core dump file.
Note that the program will run as user and group root.
See man 5 core for more information and an example core-dump-handling program.
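A minimal handler along these lines (the path, log file, and dump directory are placeholders) could be a small shell script:
#!/bin/sh
# /absolute/path/to/some/program -- invoked by the kernel on each core dump;
# argv comes from the core_pattern specifiers: $1 = PID (%p), $2 = name (%e).
# It runs as root with / as its working directory, so use absolute paths.
echo "$(date) core dumped: pid=$1 name=$2" >> /var/log/coredump-names.log
cat > "/var/crash/core.$2.$1"    # the core image arrives on standard input
Note that this fires for any core-dumping signal (SIGSEGV, SIGABRT, SIGBUS, ...), so if you strictly want SIGSEGV you would also need to pass and inspect the %s specifier (the signal number).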

How does the Automatic Bug Reporting Tool (ABRT) catch cores at runtime?

My Fedora 12 installation includes a tool called ABRT, which probably comes with GNOME. This tool operates in the background and reports, in real time, any process that has crashed.
I have used a signal handler that was able to catch a SIGSEGV signal, i.e. the process could report that it had crashed itself.
What other ways exist for a process to get information about the state (especially a core) of another process without having a parent-child connection?
Any ideas? It seems a very interesting issue.
ABRT is open source, after all, so why not look at their code? The architecture is explained here; it looks like they monitor $COREDUMPDIR to detect when a new core file appears.
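If you want to do something similar yourself, watching the dump directory with inotify is one approach; a sketch, assuming inotify-tools is installed and the default ABRT spool directory:
# React whenever a new entry appears under ABRT's spool directory
inotifywait -m -e create /var/spool/abrt |
while read -r dir event name; do
echo "new crash entry: $dir$name"
done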
Your question is not entirely clear, but it is possible to get a core of a running process using gcore:
gcore(1)                          GNU Tools                          gcore(1)

NAME
       gcore - Generate a core file for a running process

SYNOPSIS
       gcore [-o filename] pid

DESCRIPTION
       gcore generates a core file for the process specified by its process
       ID, pid. By default, the core file is written to core.pid, in the
       current directory.

       -o filename
              write core file to filename instead of core.pid
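For example, to snapshot a running process by PID and then inspect the result (the PID and paths are illustrative):
# Take a core snapshot of PID 1234 without killing the process
gcore -o /tmp/myapp 1234        # writes /tmp/myapp.1234
# Examine it with gdb alongside the matching executable
gdb /usr/bin/myapp /tmp/myapp.1234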
