Tuesday, November 16, 2021

[SOLVED] Preserve side-effect and capture stdout and stderr of a bash function in separate variables?

Issue

Suppose the bash function dump modifies a variable and prints different contents to stdout and stderr:

function dump() {
  var1=1
  echo OUT
  echo ERR 1>&2
}

The variable var1 could be defined in the context of its caller or could be initially defined by the function itself. If var1 is defined in the caller, the modification of var1 inside the function dump should be propagated to the caller. If var1 is initially defined inside dump, there are no requirements on whether var1 should be propagated to the caller or not.
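A naive attempt makes the problem concrete: command substitution runs dump in a subshell, so the assignment to var1 never reaches the caller (a short sketch using the dump function from above):

```shell
#!/bin/bash
dump() {
  var1=1
  echo OUT
  echo ERR 1>&2
}

var1=0
# Command substitution captures stdout, ERR still goes to the script's
# stderr, and the var1=1 assignment happens only in the subshell.
stdout=$(dump)
echo "var1=$var1"   # still 0 in the caller
```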

Given this bash script may run on a system where all mounted file systems are read-only and no tmpfs are mounted, the workaround of this problem should not employ any redirect to file operations. The script may run without root privilege or any capabilities. Commands like unshare may not work, as user namespaces are not permitted. You may also assume that the bash built-in network files /dev/tcp and /dev/udp are patched out.

Is there any way to capture the stdout and stderr of dump in different variables, while exposing variable modifications (the side effects)?


Solution

Is there any way to capture the stdout and stderr of dump in different variables, while exposing variable modifications (the side effects)?

Sure. Just use temporary files. For example:

stdoutf=$(mktemp)
stderrf=$(mktemp)
dump >"$stdoutf" 2>"$stderrf"
stderr=$(<"$stderrf")
stdout=$(<"$stdoutf")
rm "$stderrf" "$stdoutf"

file systems are read-only and no tmpfs are mounted

Many shell commands will stop working then: the shell works primarily on files, and without files shell scripting gets hard. Still, I'd expect /dev/shm to be mounted, and you can use it as a temporary location.
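For example, the temp-file approach above can be pointed at /dev/shm (a sketch; it assumes /dev/shm is a writable tmpfs on your system and that your mktemp supports -p):

```shell
#!/bin/bash
dump() { var1=1; echo OUT; echo ERR 1>&2; }

# Assumption: /dev/shm is a writable tmpfs even though / is read-only.
stdoutf=$(mktemp -p /dev/shm) || exit 1
stderrf=$(mktemp -p /dev/shm) || exit 1
dump >"$stdoutf" 2>"$stderrf"   # dump runs in the current shell: var1 survives
stdout=$(<"$stdoutf")
stderr=$(<"$stderrf")
rm -f "$stdoutf" "$stderrf"
declare -p var1 stdout stderr
```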

not employ any redirect to file operations

Ugh, asking for trouble... Anyway, you can buffer data in subprocess memory and transfer it through temporary fifos or shared memory. You could even write a bash builtin that calls mmap(), shares the fd between processes, stores the data there, and reads it back in the parent (I remember seeing the source of such a program on GitHub, but I can't find it now). Anyway, here's an example with the handy coproc:

#!/bin/bash

dump() {
  var1=1
  echo OUT
  echo ERR 1>&2
}

bufferer() {
    stdout=""
    stderr=""
    while IFS= read -r line; do
        case "$line" in
        stderr*) stderr+=${line#stderr}$'\n' ;;
        stdout*) stdout+=${line#stdout}$'\n' ;;
        esac
    done
    # printf instead of here-strings: bash may back <<< with a temporary
    # file, which the read-only-filesystem constraint rules out
    printf '%s\n' "$stdout"
    echo MARK
    printf '%s\n' "$stderr"
}

coproc { bufferer; }

exec 9>&"${COPROC[1]}"
dump > >(xxd -p | sed 's/^/stdout/' >&9) 2> >(xxd -p | sed 's/^/stderr/' >&9)
eval "exec ${COPROC[1]}<&-"
exec 9>&-

out=$(cat <&"${COPROC[0]}")
eval "exec ${COPROC[0]}<&-"

stdout=$(printf "%s" "${out%MARK*}" | xxd -p -r)
stderr=$(printf "%s" "${out#*MARK}" | xxd -p -r)

declare -p var1 stdout stderr

The script outputs:

declare -- var1="1"
declare -- stdout="OUT"
declare -- stderr="ERR"
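The temporary-fifo route mentioned earlier can look like this (a sketch; it assumes some writable location such as /dev/shm exists to hold the fifo nodes themselves, while the data only ever lives in pipe buffers, never in regular files):

```shell
#!/bin/bash
dump() { var1=1; echo OUT; echo ERR 1>&2; }

# Assumption: /dev/shm is writable for the fifo nodes.
d=$(mktemp -d -p /dev/shm) || exit 1
mkfifo "$d/out" "$d/err"

# Open background readers first, so dump's open-for-write on each fifo
# has a reading side to rendezvous with and does not block forever.
exec 3< <(cat "$d/out")
exec 4< <(cat "$d/err")

dump >"$d/out" 2>"$d/err"   # runs in the current shell: var1 survives

stdout=$(cat <&3)
stderr=$(cat <&4)
exec 3<&- 4<&-
rm -rf "$d"

declare -p var1 stdout stderr
```

Note this only stays deadlock-free for output smaller than the pipe buffers; the coproc version above is the more robust variant of the same idea.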


Answered By - KamilCuk