How to disable timeout in LLDB? - lldb

In LLDB console, my process is stopped. I run thread step-in and eventually get:
Command timed out
How do I extend or disable this timeout?
In my case, this timeout is expected because the program requires external interaction before going to the next line.

thread step-in has no timeout. That wouldn't make any sense, as your last comment demonstrates.
The print command can take a timeout, but by default does not. If you run po, the object-description printing part of that command is run with a timeout. And if you have any code-running variable formatters, they are also run with a timeout. lldb has removed most of the built-in code-running formatters, though there are a few still around, and they could also be responsible for the timeout message. But other than printing, there aren't really many things lldb does with a timeout...
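If it is expression evaluation that is timing out, the limit can be raised per expression. A sketch, not taken from the question (someSlowCall is a placeholder; the --timeout value is in microseconds):

```
(lldb) expr --timeout 5000000 -- someSlowCall()
(lldb) script opts = lldb.SBExpressionOptions(); opts.SetTimeoutInMicroSeconds(5000000); print lldb.frame.EvaluateExpression("someSlowCall()", opts)
```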
Anyway, what you are probably seeing is that after the previous stop happened some code was being run to present locals or something similar and that command was what timed out.
If you can get this to happen reliably, then please file a bug with http://bugreporter.apple.com.

Related

Asynchronous Put Commands with snowflake.connector.python Throws Error

When using the PUT command in a threaded process, I often receive the following traceback:
  File "/data/pydig/venv/lib64/python3.6/site-packages/snowflake/connector/cursor.py", line 657, in execute
    sf_file_transfer_agent.execute()
  File "/data/pydig/venv/lib64/python3.6/site-packages/snowflake/connector/file_transfer_agent.py", line 347, in execute
    self._parse_command()
  File "/data/pydig/venv/lib64/python3.6/site-packages/snowflake/connector/file_transfer_agent.py", line 1038, in _parse_command
    self._command_type = self._ret["data"]["command"]
KeyError: 'command'
It seems fairly benign, yet occurs randomly. The command itself appears to run successfully when I look at the stage. To work around it, I simply catch KeyError when a PUT occurs and retry several times. This allows processes to continue as expected, but leads to issues with subsequent COPY INTO statements. Mainly, because the initial PUT succeeds, I receive a LOAD_SKIPPED status from the COPY INTO. Effectively, the file is put and copied, but we lose information such as rows_parsed, rows_loaded, and errors_seen.
Please advise on workarounds for the initial traceback.
NOTE: An example output after running PUT/COPY INTO processes: SAMPLE OUTPUT
NOTE: I have found I can use the FORCE parameter with COPY INTO to bypass the LOAD_SKIPPED status, however, the initial error still persists, and this can cause duplication.
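The retry described above can be isolated in a small helper so that only the PUT is retried and only the specific spurious error is swallowed. This is a minimal sketch, not Snowflake API; execute_put is any callable you supply (e.g. a lambda wrapping cursor.execute). Note it does not by itself cure the LOAD_SKIPPED symptom, since the first PUT may have succeeded despite the error:

```python
import time

def put_with_retry(execute_put, attempts=3, delay=1.0):
    """Run execute_put(), retrying only on the spurious
    KeyError: 'command' raised from file_transfer_agent.py."""
    last_err = None
    for _ in range(attempts):
        try:
            return execute_put()
        except KeyError as err:
            if err.args != ("command",):
                raise  # an unrelated KeyError: do not swallow it
            last_err = err
            time.sleep(delay)
    raise last_err
```

Usage would look like put_with_retry(lambda: cursor.execute(put_sql)); the COPY INTO then runs once, outside the retry loop.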

Automatically connect to a CGI process and break in GDB before it exits?

For a C application accessed via CGI-BIN, documentation online for accessing the process and breaking in GDB relies on manipulating the source code (i.e. adding an infinite loop), in order for the process to be available long enough for a developer to attach, exit the loop, and debug.
Is it feasible that a tool could monitor the process list, and attach via GDB, immediately breaking in order for a developer to achieve this without requiring source code changes?
The rough structure of what I have in mind to develop is something along the lines of:
1. My process monitors the process list on the system.
2. A process matching the name of my application, and owner Apache appears in the list.
3. My process immediately performs a 'pgrep' and 'gdb -p' command, then sends a breakpoint command to pause the process.
4. The developer can then access the process and look at the flow of execution.
Is this feasible as an idea, or not possible due to some constraint (i.e. a race condition which may not always be fulfilled)?
Is this feasible
Sure: a trivial shell script will do:
while true; do
    PID=$(pgrep my_app)
    if [[ -n "$PID" ]]; then
        # Attaching stops the target immediately; detaching resumes it.
        gdb -p "$PID"
        break
    fi
done
a race condition
The problem is that between pgrep and gdb -p the application may make significant progress, or even run to completion.
The only way to avoid that is to intercept all execve system calls on the system, as Tom Tromey's preattach.stp does.

Calling StepOut() and EvaluateExpression() in immediate sequence

Calling StepOut() and then EvaluateExpression() in immediate sequence, for example from a script, does not return the expected value.
It does work when manually and separately calling these functions from the console:
(lldb) script lldb.thread.StepOut()
(lldb) script print lldb.frame.EvaluateExpression("$rax").description
However, it does not work when combining them into one statement:
(lldb) script lldb.thread.StepOut(); print lldb.frame.EvaluateExpression("$rax").description
This prints None to the console.
Checking the process's state shows that there's a difference between the two forms:
(lldb) script lldb.thread.StepOut()
(lldb) script print lldb.process.state
The state value is lldb.eStateStopped.
When running in sequence, the state immediately after StepOut is different:
(lldb) script lldb.thread.StepOut(); print lldb.process.state
Here the state is lldb.eStateRunning.
So the question is:
How should code be written to ensure StepOut has fully completed? I'm assuming that requires the state to be back to stopped, and the frame to be set up, before calling EvaluateExpression().
The lldb SBDebugger can run in either synchronous or asynchronous mode.
In async mode, the commands that cause the debuggee to run return as soon as it starts running. That's useful if you are planning to control the whole debug session, handling events yourself, etc. There's an example of doing that here:
http://llvm.org/svn/llvm-project/lldb/trunk/examples/python/process_events.py
In synchronous mode, StepOut won't return until the debuggee stops again. That mode is more convenient for one-off commands like the ones you show.
You can set the mode on the debugger using the "SBDebugger.SetAsync" call, passing True for async and False for sync.
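Applied to the one-liner from the question, the synchronous form would be (same session as in the question; $rax assumed to be readable on the target):

```
(lldb) script lldb.debugger.SetAsync(False)
(lldb) script lldb.thread.StepOut(); print lldb.frame.EvaluateExpression("$rax").description
```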

create process independent of bash

I have written a program which calculates the battery level available on my laptop. I have also defined a threshold value in the program. Whenever the battery level falls below the threshold, I would like to call another process. I have used system("./invoke.o"), where invoke.o is the program that has to run. I am running a script which runs the battery-level checker every 5 seconds. Everything is working fine, but when I close the bash shell, the automatic invocation of invoke.o no longer happens. How should I make invoke.o be invoked irrespective of whether bash is closed or not? I am using Ubuntu Linux.
Try running it as: nohup ./myscript.sh, where the nohup command allows you to close the shell without terminating the process.
You could run your script as a cron job. This lets cron set up standard input and output for you, reschedule the job, and it will send you email if it fails.
The alternative is to run a script in the background with all input and output, including standard error output, redirected.
While you could make a proper daemon out of your program that kind of effort is probably not necessary.
man nohup
man upstart
man 2 setsid (more complex, leads to longer trail of breadcrumbs on daemon launching).
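To make the two common options above concrete (battery_check.sh is a placeholder name for the poller script, not taken from the question):

```
# Start detached so closing the terminal does not kill it:
#   nohup /path/to/battery_check.sh >"$HOME/battery_check.log" 2>&1 &

# Or a crontab entry (crontab -e); cron's floor is one minute, so keep
# the 5-second loop inside the script itself:
#   */5 * * * * /path/to/battery_check.sh
```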

How to detect pending system shutdown on Linux?

I am working on an application where I need to detect a system shutdown.
However, I have not found any reliable way to get a notification of this event.
I know that on shutdown, my app will receive a SIGTERM signal followed by a SIGKILL. I want to know if there is any way to query if a SIGTERM is part of a shutdown sequence?
Does any one know if there is a way to query that programmatically (C API)?
As far as I know, the system does not provide any other method to query for an impending shutdown. If it does, that would solve my problem as well. I have been trying out runlevels as well, but change in runlevels seem to be instantaneous and without any prior warnings.
Maybe a little bit late. Yes, you can determine whether a SIGTERM is part of a shutdown sequence by invoking the runlevel command. Example:
#!/bin/bash
trap "runlevel >$HOME/run-level; exit 1" TERM
read line
echo "Input: $line"
Save it as, say, term.sh and run it. By executing killall term.sh, you should be able to see and investigate the run-level file in your home directory. Execute any of the following:
sudo reboot
sudo halt -p
sudo shutdown -P
and compare the differences in the file. Then you should have an idea of how to do it.
There is no way to determine whether a SIGTERM is part of a shutdown sequence. To detect a shutdown sequence you can either use rc.d scripts, as ereOn and Eric Sepanson suggested, or use a mechanism like DBus.
However, from a design point of view it makes no sense to ignore SIGTERM even if it is not part of a shutdown. SIGTERM's primary purpose is to politely ask apps to exit cleanly and it is not likely that someone with enough privileges will issue a SIGTERM if he/she does not want the app to exit.
From man shutdown:
If the time argument is used, 5 minutes before the system goes down
the /etc/nologin file is created to ensure that further logins shall
not be allowed.
So you can test for the existence of /etc/nologin. It is not optimal, but probably the best you can get.
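The check for that file is one line; keep in mind it is only a heuristic (the file is created by timed shutdowns and can also exist for other reasons):

```python
import os

def timed_shutdown_hinted() -> bool:
    """True if /etc/nologin exists, which shutdown(8) creates about
    five minutes before a timed shutdown takes effect."""
    return os.path.exists("/etc/nologin")
```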
It's a bit of a hack, but if the server is running systemd, you can run
/bin/systemctl list-jobs shutdown.target
... it will report ...
JOB UNIT TYPE STATE
755 shutdown.target start waiting <---- existence means shutting down
1 jobs listed.
... if the server is shutting down or rebooting ( hint: there's a reboot.target if you want to look specifically for that )
You will get "No jobs running." if it is not being shut down.
You have to parse the output, which is a bit messy, as systemctl doesn't return a different exit code for the two results. But it does seem reasonably reliable. You will need to watch out for a format change in the messages if you update the system, however.
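Since the exit code is the same either way, the check has to look at the text. A sketch of that parsing (column layout assumed from the sample output above; not guaranteed stable across systemd versions):

```python
import subprocess

def shutdown_job_queued(output=None) -> bool:
    """Return True if shutdown.target appears as a queued job in the
    output of `systemctl list-jobs shutdown.target`."""
    if output is None:
        output = subprocess.run(
            ["systemctl", "list-jobs", "shutdown.target"],
            capture_output=True, text=True).stdout
    for line in output.splitlines():
        fields = line.split()
        # JOB UNIT TYPE STATE -> unit name is the second column
        if len(fields) >= 2 and fields[1] == "shutdown.target":
            return True
    return False
```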
Making your application responding differently to some SIGTERM signals than others seems opaque and potentially confusing. It's arguable that you should always respond the same way to a given signal. Adding unusual conditions makes it harder to understand and test application behavior.
Adding an rc script that handles shutdown (by sending a special signal) is a completely standard way to handle such a problem; if this script is installed as part of a standard package (make install or rpm/deb packaging) there should be no worries about control of user machines.
I think I got it.
Source =
https://github.com/mozilla-b2g/busybox/blob/master/miscutils/runlevel.c
I copy part of the code here, just in case the reference disappears.
#include "libbb.h"
...
struct utmp *ut;
char prev;

if (argv[1]) utmpname(argv[1]);
setutent();
while ((ut = getutent()) != NULL) {
    if (ut->ut_type == RUN_LVL) {
        prev = ut->ut_pid / 256;
        if (prev == 0) prev = 'N';
        printf("Runlevel: prev=%c current=%c\n", prev, ut->ut_pid % 256);
        endutent();
        return 0;
    }
}
puts("unknown");
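The same ut_pid decoding in Python, for reference (a RUN_LVL utmp record packs the previous and current runlevel characters into one integer, as the C code above shows):

```python
def decode_run_lvl(ut_pid: int):
    """Split a RUN_LVL utmp ut_pid field into (previous, current)
    runlevel characters; 0 for 'no previous level' becomes 'N'."""
    prev, cur = divmod(ut_pid, 256)
    return (chr(prev) if prev else "N"), chr(cur)
```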
See man systemctl; you can determine whether the system is shutting down like this:
if [ "$(systemctl is-system-running)" = "stopping" ]; then
    # Do what you need
fi
This is in bash, but you can do the same with system() in C.
The practical answer to do what you originally wanted is to check for the shutdown process (e.g. ps aux | grep "shutdown -h") and then, if you want to be sure, check its command-line arguments and the time it was started (e.g. "shutdown -h +240" started at 14:51 will shut down at 18:51).
In the general case, from the point of view of the entire system, there is no way to do this. There are many different ways a "shutdown" can happen. For example, someone can decide to pull the plug in order to hard-stop a program that they know has bad/dangerous behaviour at shutdown time, or a UPS could first send a SIGHUP and then simply fail. Since such a shutdown can happen suddenly and without warning anywhere in a system, there is no way to be sure that it's okay to keep running after a SIGHUP.
If a process receives SIGHUP, you should basically assume that something nastier will follow soon. If you want to do something special and partially ignore SIGHUP, then (a) you need to coordinate that with whatever program will do the shutdown, and (b) you need to be prepared so that if some other system does the shutdown and kills you soon after a SIGHUP, your software and data will survive. Write out any data you have, and only continue writing to append-only files with safe atomic updates.
For your case, I'm almost sure your current solution (treat all SIGHUPs as a shutdown) is the correct way to go. If you want to improve things, you could add a feature to the shutdown program that sends a notification via DBus or something similar.
When the system shuts down, the rc.d scripts are called.
Maybe you can add a script there that sends some special signal to your program.
However, I doubt you can stop the system shutdown that way.
