Issue
I am looking for a way to run a function in parallel and to know exactly when all of its instances have finished. I added a spinner (the instances run for different lengths of time, depending on the variables, so I need to see something on the screen) and tried it the following way:
```shell
for a in "${ARRAY[@]}"; do
    spin='/-\|'
    while true; do
        i=$(( (i+1) % 4 )); printf "\r[ ${spin:$i:1} ] "; sleep .3;
    done & someFunction &
    kill $!; trap 'kill $!' SIGTERM
done
```
But someFunction doesn't work. I assume that because it is started as someFunction &, it gets killed instantly by the next line of code. The spinner also keeps running on the CLI after the script ends.

What is the correct way to run someFunction simultaneously, wait until the last instance finishes, and keep the spinner running until then?
Solution
Because of the way that Bash manages background processes, there is usually a small chance that killing a process by PID will kill the wrong process. If you are using Bash 4.3 (released in 2014) or later, try this kill-free code:
```shell
#! /bin/bash -p

# FIXME: Define the 'args' array and the 'someFunction' function

# Run a spinner as a coprocess
coproc spinner {
    declare -r spin='/-\|'
    declare i=0
    while read -r -t 0.3 _; (( $? > 128 )); do
        printf '\r[ %s ] ' "${spin:i++%4:1}" >&2
    done
}

# Disown the spinner coprocess so 'wait' will not wait for it to exit
disown %%

for a in "${args[@]}"; do
    someFunction "$a" &
done

# Wait for all background processes (except the disowned spinner) to exit
wait

# Close the pipe to the spinner. It will read EOF and exit.
exec {spinner[1]}>&-
```
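To see the whole approach working end to end, here is a self-contained sketch of the script above in which `args` and `someFunction` are filled in with placeholders (a stub that just sleeps for its argument and records it; your real function would do the actual work):

```shell
#!/bin/bash
# Self-contained sketch of the approach above. 'args' and 'someFunction'
# are placeholders: the stub sleeps for its argument, then records it.
results=$(mktemp)
someFunction() {
    sleep "$1"
    echo "done: $1" >> "$results"
}
args=(0.2 0.5 0.1)

# Spinner coprocess: redraws every 0.3s while its read keeps timing out,
# and exits as soon as its input pipe is closed (read then sees EOF).
coproc spinner {
    declare -r spin='/-\|'
    declare i=0
    while read -r -t 0.3 _; (( $? > 128 )); do
        printf '\r[ %s ] ' "${spin:i++%4:1}" >&2
    done
}
disown %%                 # so 'wait' below ignores the spinner

for a in "${args[@]}"; do
    someFunction "$a" &   # run all instances in parallel
done
wait                      # blocks until every someFunction instance exits

exec {spinner[1]}>&-      # spinner reads EOF and stops
printf '\rall done      \n' >&2
```

With the placeholder sleeps, the spinner runs for about half a second and the `results` file ends up with one line per array element.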
- Running the spinner as a coprocess makes it possible to stop it by closing its input pipe instead of killing it. It also makes it possible to disown the spinner process so `wait` doesn't wait for it to exit.
- As described in the `read` documentation, `read -r -t 0.3 _` will return with a status greater than 128 if it fails to read an input line within 0.3s. Using `read` instead of `sleep` for the delay in the spinner saves resources by avoiding a subprocess for every delay.
- `exec {spinner[1]}>&-` closes the write end of the pipe to the coprocess. This causes the `read` in the coprocess to return with a status less than 128, which makes the coprocess exit immediately. Without the explicit close, the coprocess will continue to run until the top-level process exits (at which point the OS closes the pipe automatically). So the explicit close is unnecessary if the top-level process exits immediately after the `wait`, but I would do it anyway to allow for more code being added later (or for the code being pasted into a bigger program).
- `exec {spinner[1]}>&-` doesn't work in versions of Bash older than 4.3. One way to work around the problem, for versions back to Bash 4.0, is to replace it with `eval "exec ${spinner[1]}>&-"`. Use of `eval` is best avoided (see Why should eval be avoided in Bash, and what should I use instead?), but I don't know of an alternative that works for Bash 4.0 in this case.
- ALL_UPPERCASE variable names are best avoided because there is a danger of clashes with the large number of special ALL_UPPERCASE variables that are used in shell programming. See Correct Bash and shell script variable capitalization. That's why I replaced `ARRAY` in the original code with `args`.
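The two `read` exit-status cases that the spinner's loop condition distinguishes can be checked in isolation. This small sketch uses `sleep` in a process substitution as a stand-in for a pipe that stays silent, and `/dev/null` as a stand-in for a pipe that has been closed:

```shell
# Timeout case: no line arrives within 0.3s, so read's status is > 128
# (128 plus a signal number, per the Bash manual).
read -r -t 0.3 _ < <(sleep 1)
timeout_status=$?

# EOF case: /dev/null yields end-of-file at once, so read returns a
# small nonzero status (1), which is what ends the spinner's loop.
read -r -t 0.3 _ < /dev/null
eof_status=$?

echo "timeout=$timeout_status eof=$eof_status"
```

In the spinner, `(( $? > 128 ))` keeps looping through the timeout case and falls through on the EOF case, which is why closing the pipe stops it.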
Answered By - pjh
Answer Checked By - Candace Johnson (WPSolving Volunteer)