In Snowflake, can pipes be suspended and resumed? True or False - snowflake-cloud-data-platform

I am confused because the Snowflake documentation says Snowpipes can be paused and resumed. So should I take the statement as true? Can someone help me reach a conclusion: are pause and suspend the same thing in Snowflake, or are they different?

Yes, pipes can be paused (pausing is what suspending a pipe means here):
ALTER PIPE … SET PIPE_EXECUTION_PAUSED = true
and resumed:
ALTER PIPE … SET PIPE_EXECUTION_PAUSED = false
To verify that the pipe is running, query the SYSTEM$PIPE_STATUS function and check that the pipe execution state is RUNNING.

Related

propagating signal to child processes

I have a collection of processes that I'm starting from a shell script as follows:
#!/bin/bash
#This is a shortcut to start multiple storage services
function finish {
alljobs=$(jobs -p)
if [ ! -z "$alljobs" ]; then
kill $alljobs >/dev/null 2>&1
else
echo "Entire trio ceased running"
fi
}
trap finish EXIT
./storage &
P1=$!
./storage &
P2=$!
./storage &
P3=$!
wait $P1 $P2 $P3
Currently, it executes how I want it to, in that when I send a ctrl+c signal to it, the script sends that signal to all my background processes.
However: I've now extended these programs so that, based on connections/messages they receive from clients, they may do an execv, killing themselves and starting up a new, separate program. (For the curious, they're simulating a "server dead" state by starting up an idling process, which may then receive signals to start up the original process again.)
The problem is that, after the execv, this new process no longer responds to a kill sent by the bash script.
Is there a way to allow for this original script's execution (and subsequent signalling) to also send a signal to the newly exec'd process as well?
I suggest you consider searching for child processes by the parent pid. More specifically, before killing a pid, search for the child processes of that pid using ps and kill those children first. Finally, kill the parent.
I have a feeling that there are race conditions that will cause this to fail in some cases.
Update: my issue was completely unrelated to this script; the pid of the running/rebooted processes never changed, but I was inadvertently inheriting the signals I was blocking in various threads internal to the program. Some sensible calls to pthread_sigmask solved the issue.
Thanks to #Mark for the tips about child processes though; if there were fork calls going on, that would have been a very good approach!
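For anyone who hits the same symptom, here is a minimal sketch (not the poster's program) of that kind of pthread_sigmask fix. It clears SIGTERM and SIGINT from the calling thread's signal mask; since the mask survives execv, doing this before the exec keeps the replacement image killable from the launching script.

#include <pthread.h>
#include <signal.h>

/* Remove SIGTERM and SIGINT from the calling thread's signal mask.
 * The mask is inherited across execv, so call this before the exec. */
static void unblock_termination_signals(void)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGTERM);
    sigaddset(&set, SIGINT);
    pthread_sigmask(SIG_UNBLOCK, &set, NULL);  /* sigprocmask() also works in
                                                  single-threaded code */
}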

How to wait for multiple instances of one program to complete in Linux?

How can I wait for multiple instances of the same program to finish? Below is my scenario; any suggestions or pointers?
I need to restart a running C process. After googling for a long time, I figured out that restarting can only be done by fork and exec (I need the new instance to have a different pid than the original one, hence using only exec won't work). Below is the sequence I used.
Shell 1 (bash script):
1. Start the first instance (./test.exe, let's say pid 100)
2. Wait for it to complete (pid 100) <<< here I need to wait for all instances of test.exe to complete
Shell 2:
1. Send a signal to the above process (pid 100)
2. In the signal handler, fork a new process (new pid 200) that execs ./test.exe --restart, and kill the parent (pid 100)
Now my question is: how can I wait for all instances of test.exe to complete in shell 1's bash script? (Basically I have to wait until pid 200 has completed.)
With my current approach, shell 1's bash script exits as soon as I send the signal to kill pid 100.
Update:
Actually I am looking for some bash/unix command to wait until all instances of test.exe are finished. Something like 'wait $(pgrep -f test.exe)'.
Basically you are looking for inter-process synchronisation mechanisms, i.e. inter-process semaphores and inter-process mutexes.
Two approaches that come to mind are POSIX semaphores and the older System V semaphores. I would recommend the former.
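As a rough illustration only (the semaphore name and the way the instance count is agreed on are assumptions, not part of the question), a named POSIX semaphore lets the waiter block until every test.exe instance, including the re-exec'd one, has posted on exit:

#include <fcntl.h>          /* O_CREAT */
#include <semaphore.h>
#include <stdio.h>

#define DONE_SEM "/test_exe_done"   /* hypothetical name shared by all parties */

/* Each test.exe instance calls this just before it exits
 * (the restarted instance, pid 200, posts as well). */
void signal_done(void)
{
    sem_t *s = sem_open(DONE_SEM, O_CREAT, 0644, 0);
    if (s != SEM_FAILED) {
        sem_post(s);
        sem_close(s);
    }
}

/* The waiter calls this; it blocks until `count` instances have posted. */
void wait_for_instances(int count)
{
    sem_t *s = sem_open(DONE_SEM, O_CREAT, 0644, 0);
    if (s == SEM_FAILED)
        return;
    while (count-- > 0)
        sem_wait(s);
    sem_close(s);
    sem_unlink(DONE_SEM);
    printf("all test.exe instances finished\n");
}

Depending on your libc you may need to link with -pthread or -lrt.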
Also check out this SO reply.
Hope this helps :)

Less Hacky Way Than Using System() Call?

So I have this old, nasty piece of C code that I inherited on this project from a software engineer that has moved on to greener pastures. The good news is... IT RUNS! Even better news is that it appears to be bug free.
The problem is that it was designed to run on a server with a set of start up parameters input on the command line. Now, there is a NEW requirement that this server is reconfigurable (didn't see that one coming...). Basically, if the server receives a command over UDP, it either starts this program, stops it, or restarts it with new start up parameters passed in via the UDP port.
Basically the code that I'm considering using to run the obfuscated program is something like this (sorry I don't have the actual source in front of me, it's 12:48AM and I can't sleep, so I hope the pseudo-code below will suffice):
//my "bad_process_manager"
int manage_process_of_doom() {
while(true) {
if (socket_has_received_data) {
int return_val = ParsePacket(packet_buffer);
// if statement ordering is just for demonstration, the real one isn't as ugly...
if (packet indicates shutdown) {
system("killall bad_process"); // process name is totally unique so I'm good?
} else if (packet indicates restart) {
system("killall bad_process"); // stop old configuration
// start with new parameters that were from UDP packet...
system("./my_bad_process -a new_param1 -b new_param2 &");
} else { // just start
system("./my_bad_process -a new_param1 -b new_param2 &");
}
}
}
So I'm wondering if there's a neater way of doing this without all the system() calls. I want to make sure that I've exhausted all possible options that don't require cracking open the C file: I'm afraid that actually manipulating all these values on the fly would mean rewriting the whole file I've inherited, since it was never designed to be configurable while the program is running.
Also, in terms of starting the process, am I correct to assume that throwing the "&" in the system() call will return immediately, just like I would get control of the terminal back if I ran that line from the command line? Finally, is there a way to ensure that stderr (and maybe even stdout) gets printed to the same terminal screen that the "manager" is running on?
Thanks in advance for your help.
What you need from the server:
Ideally your server process that you're controlling should be creating some sort of PID file. Also ideally, this server process should hold an exclusive lock on the PID file as long as it is still running. This allows us to know if the PID file is still valid or the server has died.
Receive shutdown message:
Try to get a lock on the PID file. If the lock succeeds, there is nothing to kill (the server has died; if you proceed with the kill regardless, you may kill the wrong process), so just remove the old PID file.
If the lock fails, read the PID from the file, kill() that PID, and remove the old PID file.
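A sketch of that shutdown path, assuming the server flock()s its PID file while it is alive (the file name and the choice of SIGTERM are illustrative):

#include <fcntl.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/file.h>
#include <sys/types.h>
#include <unistd.h>

static void stop_server(const char *pidfile)
{
    int fd = open(pidfile, O_RDONLY);
    if (fd < 0)
        return;                            /* no PID file: nothing to stop */

    if (flock(fd, LOCK_EX | LOCK_NB) == 0) {
        /* We got the lock, so the server is already gone; just clean up. */
        unlink(pidfile);
    } else {
        /* Server still holds the lock: read its PID and signal it. */
        char buf[32] = {0};
        pid_t pid = (read(fd, buf, sizeof buf - 1) > 0) ? (pid_t)atoi(buf) : 0;
        if (pid > 0)
            kill(pid, SIGTERM);
        unlink(pidfile);
    }
    close(fd);
}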
Receive start message:
You'll need to fork() a new process, then choose your flavor of exec() to start the new server process. The server itself should of course recreate its PID file and take a lock on it.
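For the start message, a sketch of replacing the system() calls with fork() plus exec (the binary path and flags are taken from the question; error handling trimmed):

#include <sys/types.h>
#include <unistd.h>

/* Start ./my_bad_process with parameters parsed out of the UDP packet.
 * Returns the child's PID (or -1 on fork failure) so the manager can
 * signal it later, and returns immediately -- no shell, no "&" needed. */
static pid_t start_server(const char *param1, const char *param2)
{
    pid_t pid = fork();
    if (pid == 0) {
        execl("./my_bad_process", "my_bad_process",
              "-a", param1, "-b", param2, (char *)NULL);
        _exit(127);                        /* only reached if execl fails */
    }
    return pid;
}

Because the child inherits the manager's open file descriptors, its stdout and stderr land on the same terminal the manager is running on, which also answers the logging question.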
Receive restart message:
Same as Shutdown followed by Start.

Pipes and Forks, does not get displayed on stdout

I'm working on my homework, which is to replicate the Unix command shell in C.
I've implemented everything up through single-command execution with background running (&).
Now I'm at the stage of implementing pipes, and I face this issue: for pipelines with more than one pipe, the child commands complete, but the final output doesn't get displayed on stdout (the last command's stdin is replaced with the read end of the last pipe):
dup2(pipes[lst_cmd], 0);
I tried fflush(STDIN_FILENO) at the parent too.
The exit of my program is CONTROL-D, and when I press that, the output gets displayed (and the shell exits, since my action on CONTROL-D is to exit(0)).
I think the output of the pipe is sitting in the stdout buffer but doesn't get displayed. Is there any other means than fflush to get the stuff in the buffer to stdout?
Having seen the code (unfair advantage), the primary problem was the process structure combined with not closing pipes thoroughly.
The process structure for a pipeline ps | sort was:
main shell
  - coordinator sub-shell
      - ps
      - sort
The main shell was creating N pipes (N = 1 for ps | sort). The coordinator shell was then created; it would start the N+1 children. It did not, however, wait for them to terminate, nor did it close its copy of the pipes. Nor did the main shell close its copy of the pipes.
The more normal process structure would probably do without the coordinator sub-shell. There are two mechanisms for generating the children. Classically, the main shell would fork one sub-process; it would do the coordination for the first N processes in the pipeline (including creating the pipes in the first place), and then would exec the last process in the pipeline. The main shell waits for the one child to finish, and the exit status of the pipeline is the exit status of the child (aka the last process in the pipeline).
More recently, bash provides a mechanism whereby the main shell gets the status of each child in the pipeline; it does the coordination.
The primary fixes (apart from some mostly minor compilation warnings) were:
main shell closes all pipes after forking coordinator.
main shell waits for coordinator to complete.
coordinator closes all pipes after forking pipeline.
coordinator waits for all processes in pipeline to complete.
coordinator exits (instead of returning to provide duelling dual prompts).
A better fix would eliminate the coordinator sub-shell (it would behave like the classical system described).
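For comparison, a stand-alone sketch (not the homework code) of running ps | sort where every process closes the pipe ends it does not use, which is the fix described above:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t p1 = fork();
    if (p1 == 0) {                     /* first child: ps */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]); close(fd[1]);    /* child keeps no stray pipe fds */
        execlp("ps", "ps", (char *)NULL);
        _exit(127);
    }

    pid_t p2 = fork();
    if (p2 == 0) {                     /* second child: sort */
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]); close(fd[1]);
        execlp("sort", "sort", (char *)NULL);
        _exit(127);
    }

    /* The parent closes BOTH ends; otherwise sort never sees EOF and the
     * pipeline's output appears only when the shell finally exits. */
    close(fd[0]); close(fd[1]);
    waitpid(p1, NULL, 0);
    waitpid(p2, NULL, 0);
    return 0;
}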

how to retrieve process status whose process id is given from c program?

I have to retrieve the process status (whether the process is running or stopped) of a process whose process id is given, from my C program (I am using Linux). I planned to use an exec command
and wrote the statement below:
execv("ps -el|grep |awk '{print $2}'",NULL);
But it is not giving me the desired output.
Please let me know where I am wrong.
The third field in /proc/<pid>/stat contains the process status: R if it's Running, S if it's Sleeping (there's a few others too, like D for Disk Wait and Z for Zombie).
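A short sketch of that approach (the parse is simplistic: a comm field containing spaces would need more careful handling):

#include <stdio.h>
#include <sys/types.h>

/* Fetch the single-character state (R, S, D, Z, ...) of `pid` from
 * /proc/<pid>/stat.  Returns 0 on success, -1 if it cannot be read. */
int process_state(pid_t pid, char *state)
{
    char path[64];
    snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);

    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return -1;

    /* Layout: pid (comm) state ...  -- skip the first two fields. */
    int ok = fscanf(fp, "%*d %*s %c", state);
    fclose(fp);
    return (ok == 1) ? 0 : -1;
}

Calling process_state(1234, &st) and checking st == 'R' (with 1234 standing in for the pid you were given) then tells you whether that process is currently running.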
The exec call itself only returns if it fails (it returns -1 with errno set; on success it does not return).
If you fork a child process and then exec the command in the child process, you can read the child's exit status in the parent process using the waitpid call.
I doubt exec is the family of calls you require here. system(3) might be a better fit.
