Combine parallel processing with swim lanes? - plantuml

I want to build something like this using PlantUML:
But I'm only able to build this:
This is my code:
@startuml
start
:Init;
fork
:Treatment 1;
fork again
:Treatment 2;
end fork
@enduml

As far as I understand Activity Diagrams:
- you can parallelize things into swimlanes, but there's only one start
- everything has to be in a swimlane (so I proposed "Control" as the start of your diagram)
- PlantUML (and maybe UML in general) expects you to have a join after a fork (which, again, I did in the "Control" lane):
@startuml
|Control|
:Init;
fork
|#AntiqueWhite|Swimlane1|
'start
:foo1;
'stop
fork again
|Swimlane2|
'start
:foo2;
:foo3;
'stop
|Control|
end fork
stop
@enduml
The PlantUML layout also isn't ideal, since the fork line stays only in the swimlane of the Control process.

This is possible by combining an extra column for forking and hiding the start arrows:
@startuml
| |
|a| Swim1
|b| Swim2
|c| Swim3
| |
fork
|a|
-[hidden]->
start
if (A?) then (no)
stop
else (yes)
if (B?) then (no)
stop
else (yes)
:C;
endif
endif
stop
fork again
|b|
-[hidden]->
start
:Process 2;
stop
fork again
|c|
-[hidden]->
start
:Process 3;
stop
| |
end fork
@enduml

Not a complete solution but:
@startuml
start
:Init;
fork
start
:Treatment 1;
end
fork again
start
:Treatment 2;
end
end fork
@enduml
See also the code through the PlantUML web server:
http://www.plantuml.com/plantuml/uml/SoWkIImgAStDuG8pkAmyyp9BhBdIyekoOI8XHQc99RcfUIKAXjPSgNafO4c5nFJ4p3nC9KPW9I2i03R30SW2cWu0
(the corresponding PNG is rendered there)

Related

How to use alias in activity diagram of plantUML?

I want to draw a diagram like the one below.
My original code is:
@startuml
start
if (c1) then (YES)
:A;
else (NO)
if (C2) then (NO)
:A;
else (NO)
:C;
endif
endif
stop
@enduml
It seems that there is no alias syntax in the new PlantUML syntax. I've found that the old syntax was --> "some activity" as render.
How can I refer to the same activity?
Maybe you saw this experimental feature already?
You can use label and goto keywords to denote goto processing, with:
label <label_name>
goto <label_name>
@startuml
start
if (c1) then (YES)
label sp_lab0
label A ' define the label to later reference it with goto
:A;
else (NO)
if (C2) then (NO)
label sp_lab1
goto A ' reference label to goto
else (NO)
:C;
endif
endif
stop
@enduml

Get batch script exitCode with ExecDos Plugin from NSIS

How do I get the exitCode from a batch file (as well as the output written to the DetailView window)?
From the documentation:
Use 'wait' call if you want to get exit code. (/NOUNLOAD is mandatory!)
So something like this:
ExecDos::wait /NOUNLOAD /DETAILED "$INSTDIR\bin\checkJavaVersion.bat"
(I haven't dealt with specifying which window to output to yet)
How do I access the exitCode?
You first exec and then you wait. wait does not start the process so you cannot pass the command line to it. The documentation you linked to has an example:
ExecDos::exec /NOUNLOAD /ASYNC "$EXEDIR\consApp.exe" "test_login$\ntest_pwd$\n" "$EXEDIR\execdos.log"
Pop $0 # thread handle for wait
# you can add some installation code here to execute while application is running.
ExecDos::wait $0
Pop $1 # return value
MessageBox MB_OK "Exit code $1"

Fake parallelization in script over loop (foreach line) without substantial changes in code

I am new to GNU Parallel, so I will be glad if you point out any errors or misunderstandings. I read the manual, but it mostly describes one-stage operations where the "action" (unpacking, moving, etc.) is specified directly in the GNU Parallel syntax; it says nothing about multi-stage cases where you need to perform several actions without changing the code significantly (if that is possible at all).
Is it possible to "fake" parallel processing in code that does not support it?
The code has a loop (a list of files is read in, and at some point the script reaches this loop), and all I need is for the code to perform its actions (no matter what kind of actions) on all files simultaneously rather than sequentially, without changing the code substantially (or only around line 138, see below). I don't need the kind of parallel processing that splits files or anything like that; I just want all the files to be processed at once.
As an example, here is the part of the code in question (the full code is linked below as GMT; the loop starts around line 138):
# <code> actions (see full code - link below) and check input file availability
# loop
foreach line (`awk '{print $0}' $1`)
# <code> actions (see full code - link below)
end
Source, full code: GMT
Maybe it can be implemented using other tools besides GNU Parallel? Any help is useful, and an example would be appreciated if possible. Making all of the code parallel would probably cause problems; the parallelism is only needed at the loop.
Thanks
csh has many limitations; lack of functions is one of them, and any script that's longer than a few lines will quickly turn into a spaghetti mess. This is an important reason why scripting in csh is typically discouraged.
That being said, the easiest way to modify this is to extract the loop body out to a separate script and call that with & appended. For example:
main.csh:
#!/bin/csh
foreach line (`awk '{print $0}' $1`)
./loop.csh "$line" &
end
loop.csh:
#!/bin/csh
set line = "$1"
echo "=> $line"
sleep 5
You may need to add more parameters than just $line; I didn't check the entire script.
The & will make the shell continue without waiting for the command to finish. So if there are 5,000 lines you will be running 5,000 processes at the same time. To exercise some control over the number of simultaneous processes you could use the parallel tool instead of a loop:
#!/bin/csh
awk '{print $0}' $1 | parallel ./loop.csh
Or if you want to stick with loops you can use pgrep to limit the maximum number of simultaneous processes:
foreach line (a b c d e f g h i)
# wait while too many copies of loop.csh are already running
set numprocs = `pgrep -c loop.csh`
while ( $numprocs > 2 )
sleep 2
set numprocs = `pgrep -c loop.csh`
end
./loop.csh "$line" &
end
If it is acceptable to move the inner part of the loop into a script:
parallel inner.csh ::: a b c d e f g h i
If inner.csh uses variables, then setenv them before running parallel:
setenv myvar myval
parallel inner.csh ::: a b c
a, b, and c will be passed as the first arg to inner.csh. To read the arguments from a file use:
cat file | parallel inner.csh
This also works for reading output from awk:
awk ... | parallel ...
Consider walking through the tutorial. Your command line will love you for it: https://www.gnu.org/software/parallel/parallel_tutorial.html
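To tie this back to the original loop, here is a minimal sketch (the -j value is arbitrary, and inner.csh stands in for whatever you extract the loop body into) that feeds the file list into parallel while capping the number of simultaneous jobs:
# run at most 4 copies of inner.csh at a time; {} is replaced by the current input line
awk '{print $0}' $1 | parallel -j 4 ./inner.csh {}
Here -j caps how many jobs run at once, so no manual pgrep throttling is needed.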

multi-threading in bash script and echo pids from loop

I want to run a command with different arguments in a multi-threaded fashion.
What I tried is:
#!/bin/bash
ARG1=$1
ARG2=$2
ARG3=$3
for ... #counter is i
do
main command with ARG1 ARG2 ARG3 & a[i]=$!
done
wait `echo ${a[@]}`
I used & a[i]=$! in the for loop and wait $(echo ${a[@]}) after the for loop. I want my bash script to wait till all threads finish and then echo their PIDs for me...
But when I run my script, after some time it just waits.
Thank you
I think you want this:
#!/bin/bash
for i in 0 1 2
do
sleep 3 & a[$i]=$!
done
wait
echo ${a[@]}
You are missing the $ on the array index $i in your script. Also, you don't need to say which PIDs you are waiting for if you are waiting for all of them. And you also said you wanted to see the list of PIDs at the end.
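If you also want to report each job's exit status, not just its PID, a small variation on the above (sleep 3 stands in for your real command) is to wait for each stored PID individually:
#!/bin/bash
# start three background jobs and remember their PIDs
for i in 0 1 2
do
sleep 3 & a[$i]=$!
done
# wait for each PID separately so $? reflects that particular job's exit status
for pid in "${a[@]}"
do
wait "$pid"
echo "PID $pid finished with exit status $?"
done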

zombie process can't be killed

Is there a way to kill a zombie process? I've tried calling exit to kill the process and even sending a SIGINT signal to it, but it seems that nothing can kill it. I'm programming for Linux.
Zombie processes are already dead, so they cannot be killed; they can only be reaped, which has to be done by their parent process via wait*(). This is usually handled with the child-reaper idiom in the signal handler for SIGCHLD:
while (waitpid(-1, NULL, WNOHANG) > 0) {
/* reap every child that has already exited */
}
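If you want to see a zombie appear from a shell, here is a quick illustrative sketch (bash normally reaps its own children, so the trick is to hand the child to a program that never calls wait()):
# the subshell backgrounds a child, then replaces itself with a sleep that
# never calls wait(), so the exited child lingers as a zombie for ~60 seconds
( sleep 1 & exec sleep 60 ) &
sleep 2
ps -o pid,ppid,stat,comm   # the defunct child shows up with STAT "Z"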
Here is a script I created to kill ALL zombie processes. It uses the GDB debugger to attach to the parent process and issue a waitpid call to reap the zombie process. This will leave the parent alive and only slay the zombie.
The GDB debugger will need to be installed, and you will need to be logged in with permissions to attach to the process. This has been tested on CentOS 6.3.
#!/bin/bash
##################################################################
# Script: Zombie Slayer
# Author: Mitch Milner
# Date: 03/13/2013 ---> A good day to slay zombies
#
# Requirements: yum install gdb
# permissions to attach to the parent process
#
# This script works by using a debugger to
# attach to the parent process and then issuing
# a waitpid to the dead zombie. This will not kill
# the living parent process.
##################################################################
clear
# Wait for user input to proceed, give user a chance to cancel script
echo "***********************************************************"
echo -e "This script will terminate all zombie process."
echo -e "Press [ENTER] to continue or [CTRL] + C to cancel:"
echo "***********************************************************"
read cmd_string
echo -e "\n"
# initialize variables
intcount=0
lastparentid=0
# remove old gdb command file
rm -f /tmp/zombie_slayer.txt
# create the gdb command file
echo "***********************************************************"
echo "Creating command file..."
echo "***********************************************************"
ps -e -o ppid,pid,stat,command | grep Z | sort | while read LINE; do
intcount=$((intcount+1))
parentid=`echo $LINE | awk '{print $1}'`
zombieid=`echo $LINE | awk '{print $2}'`
verifyzombie=`echo $LINE | awk '{print $3}'`
# make sure this is a zombie process and we are not getting a Z from
# the command field of the ps -e -o ppid,pid,stat,command
if [ "$verifyzombie" == "Z" ]
then
if [ "$parentid" != "$lastparentid" ]
then
if [ "$lastparentid" != "0" ]
then
echo "detach" >> /tmp/zombie_slayer.txt
fi
echo "attach $parentid" >> /tmp/zombie_slayer.txt
fi
echo "call waitpid ($zombieid,0,0)" >> /tmp/zombie_slayer.txt
echo "Logging: Parent: $parentid Zombie: $zombieid"
lastparentid=$parentid
fi
done
if [ "$lastparentid" != "0" ]
then
echo "detach" >> /tmp/zombie_slayer.txt
fi
# Slay the zombies with gdb and the created command file
echo -e "\n\n"
echo "***********************************************************"
echo "Slaying zombie processes..."
echo "***********************************************************"
gdb -batch -x /tmp/zombie_slayer.txt
echo -e "\n\n"
echo "***********************************************************"
echo "Script complete."
echo "***********************************************************"
Enjoy.
A zombie process is a process id (and associated termination status and resource usage information) that has not yet been waited for by its parent process. The only ways to eliminate it are to get its parent to wait for it (sometimes this can be achieved by sending SIGCHLD to the parent manually if the parent was just buggy and had a race condition where it missed the chance to wait) but usually you're out of luck unless you forcibly terminate the parent.
Edit: Another way, if you're desperate and don't want to kill the parent, is to attach to the parent with gdb and forcibly call waitpid on the zombie child.
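To make both suggestions concrete, here is a rough sketch; the PIDs 1234 (parent) and 5678 (zombie) are placeholders, and whether the SIGCHLD nudge helps depends entirely on how the parent handles it:
# list zombies together with their parent PIDs
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/ {print "zombie:", $1, "parent:", $2}'
# ask the parent to reap its children (only helps if the parent actually calls wait())
kill -s SIGCHLD 1234
# last resort: attach to the parent with gdb and reap the zombie by hand
gdb -p 1234 -batch -ex 'call (int) waitpid(5678, 0, 0)'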
kill -17 PARENT_PID
OR
kill -SIGCHLD PARENT_PID
(sent to the zombie's parent, not to the zombie itself) would possibly work, but like everyone else said, the zombie is waiting for its parent to call wait(), so unless the parent died without reaping or got stuck there for some reason, you might not want to kill it.
If I recall correctly, killing the parent of a zombie process will allow the zombie to die: the orphaned zombie is re-parented to init, which reaps it.
Use ps faux to get a nice hierarchical tree of your running processes showing parent/child relationships.
See unix-faqs "How do I get rid of zombie processes that persevere?"
You cannot kill zombies, as they are already dead. But if you have too many zombies, then kill the parent process or restart the service.
You can try to kill the zombie process using its PID:
kill -9 pid
Note, however, that kill -9 is not guaranteed to remove a zombie process, because the zombie is already dead; only its parent can reap it.
