gauche.process
- High-level process interface

This module provides a high-level API for process control, implemented on top of low-level system calls. It also provides “process ports”, a convenient way to send information to and receive information from subprocesses.
Note: This module used to provide shell-escape-string and shell-tokenize-string, but they are purely text manipulation and not really related to subprocesses, so we moved them to text.sh. See text.sh - Shell text utilities, for the details. For backward compatibility, we keep exporting those two procedures from gauche.process.
• Running subprocess
• Running process pipeline
• Process object
• Process ports
• Process connection
do-process cmd/args :key … {gauche.process}
do-process! cmd/args :key … {gauche.process}
run-process cmd/args :key … {gauche.process}
Runs a command with arguments given in cmd/args in a subprocess. The cmd/args argument must be a list, whose car specifies the command name and whose cdr is the command-line arguments. If the command name contains a slash, it is taken as the pathname of the executable. Otherwise the named command is searched for in the directories listed in the PATH environment variable. Each element of cmd/args is converted to a string by x->string, for convenience.
Do-process always waits for the subprocess to terminate, and returns #t if it exits successfully (i.e. with zero exit status). If the subprocess terminates abnormally, the behavior is controlled by the on-abnormal-exit keyword argument:

If it is #f (the default), #f is returned.
Note that this default is different from call-with-input-process etc., which raise an error by default. This is because do-process is intended to be used with conditionals like a shell script’s if command; i.e. (if (do-process command) then-expr else-expr) works just like shell’s if command; then then-command; else else-command; fi.
If it is :error, a <process-abnormal-exit> error is thrown.

If it is :exit-code, and the subprocess terminates with non-zero exit status, the integer exit code (the integer value passed to the subprocess’s exit()) is returned. Note that if the subprocess is terminated by a signal, a <process-abnormal-exit> is still thrown.
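For illustration (a small added sketch, assuming the standard false command, which exits with status 1 on typical systems):

(do-process '(false))
 ⇒ #f   ; default: abnormal exit yields #f

(do-process '(false) :on-abnormal-exit :exit-code)
 ⇒ 1    ; the child's integer exit code is returned instead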
Do-process! is like do-process except that it raises a <process-abnormal-exit> error when the process exits with non-zero status. It is the same behavior as giving :error to the on-abnormal-exit keyword argument of do-process. This is convenient if you want to let the script just fail when the command fails.
Run-process runs the subprocess concurrently by default, that is, it returns immediately. The return value is a <process> object, which can be used to track the status of the subprocess (see Process object).
For example, the following expression runs ls -al
.
(do-process '(ls -al))
You see the output of ls -al, then it returns #t, unless the execution of the ls command fails for some reason.
Since do-process returns the success or failure of the command as a boolean value, you can use and and or to combine commands pretty much the same way as shell’s && and || operators.
;; shell: make && make -s check
(and (do-process '(make))
     (do-process '(make -s check)))

;; shell: mv x.tmp x.c || rm -f x.tmp
(or (do-process '(mv x.tmp x.c))
    (do-process '(rm -f x.tmp)))
If you use run-process instead, you’ll get a <process> object without waiting for ls -al to finish. If you run the following expression on the REPL, you’ll likely see the return value before the output of ls.
(run-process '(ls -al))
You can keep the returned <process>
object
and call process-wait
on it to wait for its termination.
See Process object, for the details of process-wait
.
(let1 p (run-process '(ls -al))
  ... do some other work ...
  (process-wait p))
You can tell run-process to wait for the subprocess to exit; in that case, run-process calls process-wait internally. It is useful if you want to examine the exit status of the subprocess, rather than just caring about its success/failure as do-process does.
Note that -i is read as an imaginary number, so be careful when passing -i as a command-line argument; you should use a string, or write |-i| to make it a symbol.
(run-process '(ls "-i"))
Note: An alternative way to run an external process is sys-system, which takes a command line as a single string (see Process management). The string is passed to the shell to be interpreted, so you can include redirections, or pipe several commands. It is handy for quick throwaway scripts. On the other hand, with sys-system, if you want to change command parameters at runtime, you need to worry about properly escaping them (actually we have a procedure to do the job: shell-escape-string, see text.sh - Shell text utilities); you also need to be aware that /bin/sh, used by sys-system via the system(3) call, may differ among platforms, so be careful not to rely on features specific to certain systems. As a rule of thumb, keep sys-system for really simple tasks with a constant command line, and use run-process and do-process for everything else.
Note: Older versions of this procedure took arguments differently, like (run-process "ls" "-al" :wait #t), which was compatible with STk. This is still supported but deprecated.
A large number of keyword arguments can be passed to do-process and run-process to control execution of the child process. We describe them by category.
:wait flag
This can only be given to run-process. If flag is true, run-process waits until the subprocess terminates, by calling process-wait internally. Otherwise the subprocess runs asynchronously and run-process returns immediately, which is the default behavior. Note that if the subprocess is running asynchronously, it is the caller’s responsibility to call process-wait at some point to collect its exit status.
;; This returns after wget terminates.
(define p (run-process '(wget http://practical-scheme.net/) :wait #t))

;; Check the exit status
(let1 st (process-exit-status p)
  (cond [(sys-wait-exited? st)
         (print "wget exited with status " (sys-wait-exit-status st))]
        [(sys-wait-signaled? st)
         (print "wget interrupted by signal " (sys-wait-termsig st))]
        [else
         (print "wget terminated with unknown status " st)]))
:on-abnormal-exit how
This can only be given to do-process. If how is #f, which is the default, do-process returns #f when the subprocess exits abnormally (i.e. with nonzero exit status). If how is :error, it raises an error in such a case.
:fork flag
If flag is true, do-process and run-process fork to run the subprocess, which is the default behavior. If flag is false, do-process and run-process directly call sys-exec, so they never return.
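For illustration, a minimal sketch of the non-forking case; since the current process is replaced via sys-exec, this is only useful at the very end of a script or in an already-forked child:

;; With :fork #f the current process is replaced by the command,
;; so nothing after this expression is ever executed.
(do-process '(ls -al) :fork #f)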
:redirects iospecs
Specifies how to redirect the child process’s I/O. Each iospec can be one of the following, where fd, fd0, and fd1 are nonnegative integers referring to file descriptors of the child process.
(Note: If you just want to run a command and get its output as a string
take a look at process-output->string
(see Process ports).
If you want to pipe multiple commands together,
see Running process pipeline.)
(< fd source)
source can be a string, a symbol, a keyword :null, an integer, or an input port.

If it is a string, it names a file opened for reading, and the child process can read the content of the file from fd. An error is signaled if the file does not exist or cannot be opened for reading.

If it is a symbol, a unidirectional pipe is created, whose reader end is connected to the child’s fd, and whose writer end is available as an output port returned from (process-input process source).

If it is :null, the child’s fd is connected to the null device.

If it is an integer, it should specify a parent’s file descriptor opened for reading. The child sees the duped file descriptor as fd.

If it is an input port, the underlying file descriptor is duped into the child’s fd. It is an error to pass an input port without an associated file descriptor (see port-file-number in Common port operations).
(<< fd value)
(<<< fd obj)
Feeds value or obj to the input file descriptor fd of the child process. With <<, value must be either a string or a uniform vector (see Uniform vectors). It is sent to the child process as is. Using a uniform vector is useful for passing binary content. With <<<, obj can be any Scheme object, and the result of (write-to-string obj) is sent to the child process. (A sketch using << appears after the redirection notes below.)
(<& fd0 fd1)
Makes child process’s file descriptor fd0 refer to the same input
as its file descriptor fd1. Note the difference from <
;
(< 3 0)
makes the parent’s stdin (file descriptor 0) be read
by the child’s file descriptor 3, while (<& 3 0)
makes
the child’s file descriptor 3 refer to the same input as child’s stdin
(which may be redirected to a file or something else by another iospec).
See the note below on the order of processing <&
.
(> fd sink)
(>> fd sink)
sink must be either a string, a symbol, a keyword :null
,
an integer or a file output port.
If it is a string, it names a file. The output of the child to
the file descriptor fd is written to the file.
If the named file already exists, >
first truncates its
content, while >>
appends to the existing content.
For other kinds of arguments, > and >> work the same.

If sink is a symbol, a unidirectional pipe is created whose writer end is connected to the child’s fd, and whose reader end is available as an input port returned by (process-output process sink).
If sink is :null
, child’s fd is connected
to the system’s null device.
If sink is an integer, it must specify a parent’s file descriptor opened for output. The child sees the duped file descriptor as fd.
If sink is an output port, the underlying file descriptor is duped into fd in the child process.
(>& fd0 fd1)
Makes child process’s file descriptor fd0 refer to the same output
as its file descriptor fd1. Note the difference from >
;
(> 2 1)
makes the child’s stderr go to parent’s stdout,
while (>& 2 1)
makes the child’s stderr go to the same
output as child’s stdout (which may be redirected by another iospec).
;; Read both child's stdout and stderr
(let1 p (run-process '(command arg)
                     :redirects '((>& 2 1) (> 1 out)))
  (begin0 (port->string (process-output p 'out))
          (process-wait p)))
Note: You can’t use the same name (symbol) more than once for the pipe of source or sink. For example, the following code signals an error:
(run-process '(command) :redirects '((> 1 out) (> 2 out))) ; error!
You can use >&
to “merge” the output to one sink,
or <&
to “split” the input from one source, instead:
(run-process '(command) :redirects '((> 1 out) (>& 2 1)))
It is allowed to give the same file name more than once, just like the Unix shell. However, note that the file is opened individually for each file descriptor, so simply writing to them may not produce the desired result (for regular files, one output would most likely overwrite another).
Note: I/O redirections are processed at once, unlike the way the Unix shell does. For example, both of the following expressions work the same way; that is, they redirect both stdout and stderr to a file out.
(run-process '(command arg) :redirects '((>& 2 1) (> 1 "out")))
(run-process '(command arg) :redirects '((> 1 "out") (>& 2 1)))
Most Unix shells process redirections in order, so the following two command lines work differently: The first one redirects the child’s stderr to the current stdout, which is the same as the parent’s stdout, then redirects the child’s stdout to a file out. So the error messages appear in the parent’s stdout. The second one first redirects the child’s stdout to a file out, so at the time of processing 2>&1, the child’s stderr also goes to the file.

$ command arg 2>&1 1>out
$ command arg 1>out 2>&1
You can assume do-process and run-process always work like the latter, regardless of the order in the redirects argument. If you want to redirect the child’s stderr to the parent’s stdout, you can use > as follows:
(run-process '(command arg) :redirects '((> 2 1) (> 1 "out")))
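As an additional small sketch (not among the original examples), the following feeds a literal string to the child’s stdin with << and reads its stdout through a pipe named out:

(let1 p (run-process '(tr "a-z" "A-Z")
                     :redirects '((<< 0 "hello\n") (> 1 out)))
  (begin0 (read-line (process-output p 'out))
          (process-wait p)))
 ⇒ "HELLO"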
:input source
:output sink
:error sink
Redirects the child’s standard I/O. source and sink may be either a string, one of the keywords :null, :pipe, or :merge, an integer file descriptor, or a symbol.
These are really shorthand notations of the redirects argument:
:input x  ≡ :redirects '((< 0 x))
:output x ≡ :redirects '((> 1 x))
:error x  ≡ :redirects '((> 2 x))
The keyword :pipe as source or sink is supported just for backward compatibility. It works as if the symbol stdin, stdout or stderr were given, respectively:
:input :pipe  ≡ :redirects '((< 0 stdin))
:output :pipe ≡ :redirects '((> 1 stdout))
:error :pipe  ≡ :redirects '((> 2 stderr))
That is, a pipe is created and one of its ends is connected to the child process’s standard I/O, while the other end is available by calling (process-input process), (process-output process) or (process-error process). (That is because process-input and process-output use the names stdin and stdout respectively when the name argument is omitted, and (process-error p) is equivalent to (process-output p 'stderr).)
The keyword :merge can only be used for the :error keyword argument, and it is a shorthand notation for :redirects '((>& 2 1)), that is, it merges the child process’s stderr into the child process’s stdout.
See the description of redirects above for the meanings of the argument values.
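For illustration, here is the shorthand form of the earlier redirects example that merges the child’s stderr into its stdout (using a placeholder command, as elsewhere in this section):

(let1 p (run-process '(command arg) :output :pipe :error :merge)
  (begin0 (port->string (process-output p))
          (process-wait p)))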
:environment env-list
The argument must be #f (the default) or a list of strings, each of which has the form NAME=VALUE. If it is #f, the subprocess inherits the same environment variables as the current process. If it is a list of strings, it specifies the environment variables the subprocess sees. The form is the same as what sys-environ returns, so you can easily add to or remove from the current environment (see Environment inquiry).
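A small sketch, assuming this keyword argument is named :environment; it runs the child with the current environment plus one extra variable (MY_FLAG is illustrative):

(run-process '(printenv "MY_FLAG")              ; prints "1"
             :environment (cons "MY_FLAG=1" (sys-environ))
             :wait #t)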
:directory directory
If a string is given to directory, the process starts with directory as its working directory. If directory is #f, this argument is ignored. An error is signaled if directory is any other type of object, or if it is a string but not the name of an existing directory.

When the host keyword argument is also given, this argument specifies the working directory of the remote process.
Note: do-process and run-process check the validity of directory, but the actual chdir(2) is done just before exec(2), and it is possible that chdir fails in spite of the previous checks. At the moment chdir fails, there’s no reliable way to raise an exception to the caller, so it writes an error message to the standard error port and exits. A robust program may take this case into account.
:sigmask mask
Mask must be either an instance of <sys-sigset>, a list of integers, or #f. If an instance of <sys-sigset> is given, the signal mask of the executed process is set to it. A list of integers is treated as a list of signals to mask. It is important to set an appropriate mask if you call run-process from a multithreaded application. See the description of sys-exec (Process management) for the details.
If the host keyword argument is specified, this argument
merely sets the signal mask of the local process (ssh
).
:detached flag
When a true value is given, the new process is detached from the parent’s process group and belongs to its own process group. This is useful when you run a daemon process. See sys-fork-and-exec (see Process management), for the detailed description of the detached argument.
:host hostspec
This argument is used to execute the command on a remote host. The full syntax of hostspec is protocol:user@hostname:port, where the protocol:, user@, and :port parts can be omitted. The protocol part specifies the protocol used to communicate with the remote host; currently only ssh is supported, and it is also the default when protocol is omitted. The user part specifies the login name on the remote host. The hostname specifies the remote host name, and the port part specifies the alternative port number the protocol connects to.
The command line arguments are interpreted on the remote host. On the other hand, the I/O redirection is done on the local end. For example, the following code reads the file /foo/bar on the remote machine and copies its content into the local file baz in the current working directory.
(do-process '(cat "bar")
            :host "remote-host.example.com"
            :directory "/foo"
            :output "baz")
do-pipeline commands :key … {gauche.process}
run-pipeline commands :key … {gauche.process}
Convenience routines to run a pipeline of processes at once. Example:

(do-pipeline '((ls "src/") (grep "\\.c$") (wc -l)))

This is equivalent to the shell command pipeline ls src/ | grep '\.c$' | wc -l, i.e. it shows the number of C source files in the src subdirectory.
The commands argument is a list of lists. Each list must be a cmd/args argument that do-process/run-process can accept. At least one command must be specified.
The specified commands run concurrently, with the stdout of the first command connected to the stdin of the second, the stdout of the second to the stdin of the third, and so on. The stdin of the first command is fed from the source specified by the input keyword argument, and the stdout of the last command is sent to the sink specified by the output keyword argument. The default values of these are the calling process’s stdin and stdout, respectively. See do-process/run-process, for the possible values of these arguments (see Running subprocess).

The stderr of all the processes is sent to the sink specified by the error keyword argument, which defaults to the calling process’s stderr.
Like do-process, do-pipeline waits for completion of all the processes, and returns #t if the tail process succeeds (i.e. exits with zero status) or #f if it fails (i.e. exits with non-zero status). If you give :error to the on-abnormal-exit keyword argument, however, a failure of the tail process raises an error. Exit statuses of subprocesses other than the tail one are collected by process-wait, but they won’t affect the return value, and won’t cause an error even if on-abnormal-exit is :error.
On the other hand, run-pipeline returns a <process> object of the tail process. You can get the other process objects in the pipeline by applying process-upstreams to the tail process. By default, run-pipeline runs all the subprocesses in the background and returns immediately. Calling process-wait on the returned process object waits for all the subprocesses. If you give a true value to the wait keyword argument, run-pipeline waits for all the subprocesses to finish before returning.
The directory and sigmask keyword arguments are applied
to all the processes;
see do-process
/run-process
for the description
of these arguments
(see Running subprocess).
Note: In Gauche 0.9.5, we introduced run-process-pipeline. It is similar to the current run-pipeline but returns a list of subprocess objects instead of a single one. We realized it’s not very convenient, so we deprecated run-process-pipeline and replaced it with run-pipeline. We still support run-process-pipeline, but strongly recommend moving to run-pipeline as soon as possible.
<process> {gauche.process}
An object that keeps the status of a child process. You can create a process object with the run-process procedure described above. The process ports explained in the next section also use process objects.
The <process> class keeps track of the child processes spawned by high-level APIs such as run-process or open-input-process-port. The exit status of such children must be collected by process-wait or process-wait-any calls, which also do some bookkeeping. Using low-level process calls such as sys-wait or sys-waitpid directly will cause an inconsistent state.
<process-abnormal-exit> {gauche.process}
A condition type mainly used by the process port utility procedures. Inherits <error>. This type of condition is thrown when the high-level process port utilities detect that the child process exited with a non-zero status code.

Instance variable of <process-abnormal-exit>: process
A process object.
Note: In Unix terms, exiting a process by calling exit(2) or returning from main() is a normal exit, regardless of the exit status. Some commands do use a non-zero exit status to tell one of the normal results of execution (such as grep(1)). However, a large number of commands use a non-zero exit status to indicate that they couldn’t carry out the required operation, so we treat them as exceptional situations.
process? obj {gauche.process}
≡ (is-a? obj <process>)
process-pid process {gauche.process}
Returns the process ID of the subprocess process.
process-command process {gauche.process}
Returns the command invoked in the subprocess process.
process-input process :optional name {gauche.process}
process-output process :optional name {gauche.process}
Retrieves one end of a pipe whose other end is connected to the process’s input or output, respectively. name is a symbol given to the redirects argument of run-process to distinguish the pipe. See the following example:
(let1 p (run-process '(command arg)
                     :redirects '((< 3 aux-in) (> 4 aux-out)))
  (let ([auxin  (process-input p 'aux-in)]
        [auxout (process-output p 'aux-out)])
    ;; feed something to the child's input
    (display 'something auxin)
    ;; read data from the child's output
    (read-line auxout)
    ...
    )
  (process-wait p))
The symbols aux-in and aux-out are used to identify the pipes. Note that process-input returns an output port, and process-output returns an input port.
When name is omitted, stdin is used for process-input and stdout is used for process-output. These are the names used when the child’s stdin and stdout are redirected by the :input :pipe and :output :pipe arguments, respectively. If there is no pipe with the given name, #f is returned.
(let* ((process (run-process '("date") :output :pipe))
       (line (read-line (process-output process))))
  (process-wait process)
  line)
 ⇒ "Fri Jun 22 22:22:22 HST 2001"
If process is a result of run-pipeline
,
(process-input process)
and (process-input process 'stdin)
behave slightly differently—they return the pipe connected to the
stdin of the head process of the pipeline, not the
process represented by process (which is the tail of the pipeline).
This allows you to treat the whole pipeline as one entity.
(let1 p (run-pipeline `((cat) (grep "aba"))
                      :input :pipe :output :pipe)
  (display "banana\nhabana\ntabata\ncabara\n"
           (process-input p))       ; head of the pipeline
  (close-port (process-input p))
  (process-wait p)
  (port->string (process-output p)))
 ⇒ "habana\ntabata\ncabara\n"
If you want to connect one process’s output to another process’s input without using do-pipeline/run-pipeline, you need a coroutine or thread that actively reads from one process’s output and feeds it to the other process’s input. You can use a plumbing in control.plumbing (see control.plumbing - Plumbing ports) to do so easily. Plumbing also allows you to “tap” the pipe, i.e. to monitor the data that flows between the processes.
process-error process {gauche.process}
This is equivalent to (process-output process 'stderr)
.
process-alive? process {gauche.process}
Returns true if process is alive. Note that Gauche can’t know the subprocess’s status until it is explicitly checked by process-wait.
process-upstreams process {gauche.process}
If process is the result of run-pipeline
, this returns
a list of processes that are upstream of process in the pipeline.
If process is not the result of run-pipeline
, this returns
an empty list.
(define p (run-pipeline `((cat) (grep "ho") (wc)) :input :pipe))

p ⇒ #<process 20658 "wc" active>

(process-upstreams p)
 ⇒ (#<process 20656 "cat" active> #<process 20657 "grep" active>)
process-list {gauche.process}
Returns a list of active processes. A process remains active until its exit status is explicitly collected by process-wait. Once the process’s exit status is collected and its state changes to inactive, it is removed from the list process-list returns.
process-wait process :optional nohang error-on-nonzero-status {gauche.process}
Obtains the exit status of the subprocess process, and stores it in process’s status slot. The status can be obtained by process-exit-status.

By default this suspends execution until process exits. However, if a true value is given to the optional argument nohang, it returns immediately if process hasn’t exited yet.

If a true value is given to the optional argument error-on-nonzero-status, and the obtained status code is not zero, this procedure raises a <process-abnormal-exit> error.

Returns #t if this call actually obtains the exit status, or #f otherwise.

If the process object is created by run-pipeline (see Running process pipeline), process-wait waits for all of the subprocesses in the pipeline, not just the last one, unless a true value is given to the nohang argument. However, error-on-nonzero-status only applies to the status of process, which represents the last process in the pipeline; if another subprocess exits with nonzero status, that status is stored in its respective process object, but won’t cause a fuss.

If you specify a true value for nohang for a pipelined process, process-wait still probes the other subprocesses in the pipeline and updates the exit statuses of terminated ones, but doesn’t wait for unterminated subprocesses. The unterminated subprocesses should be waited for individually, or by process-wait-any, to collect their exit statuses.
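For illustration, a small sketch of non-blocking use of nohang, polling a background process without suspending the caller:

(let1 p (run-process '(sleep 3))
  (let loop ()
    (if (process-wait p #t)          ; nohang: returns #f while still running
      (print "done, status=" (process-exit-status p))
      (begin (sys-nanosleep #e1e8)   ; sleep 100ms, then poll again
             (loop)))))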
process-wait-any :optional nohang {gauche.process}
Obtains the exit status of any of the subprocesses created by run-process. Returns a process object whose exit status has been collected.

If a true value is given to the optional argument nohang, this procedure returns #f immediately if no child process has exited. If nohang is omitted or #f, this procedure waits until any of the children exits.

If there are no child processes, this procedure immediately returns #f.
process-wait/poll process :key interval max-wait continue-test raise-error? {gauche.process}
Polls the wait status of process periodically, sleeping interval nanoseconds (default 2e6 ns, i.e. 2ms) between polls. It returns as soon as it finds that the process has exited and its status has been retrieved.

If max-wait is given and not #f, it specifies the maximum duration in nanoseconds to keep polling. Once the duration expires, the procedure gives up and returns. If it is not given or #f, the procedure keeps polling until the process exits.

The continue-test, if given and not #f, must be a procedure that takes one argument, the poll count. It is called after each unsuccessful poll (that is, when the process hasn’t exited yet); the argument begins at 0 and is incremented for each call. The procedure can return #f to give up polling. When it returns a true value, polling continues.

The raise-error? argument is the same as in process-wait; an error is raised if the process’s exit status isn’t zero.

The procedure returns #t if the process exited, and #f if it gave up. The process’s exit status should be retrieved with process-exit-status from process.
process-exit-status process {gauche.process}
Returns the exit status of process retrieved by process-wait. If this is called before process-wait is called on process, the result is undefined.
The meaning of exit status depends on the platform. You need to
use sys-wait-exited?
or sys-wait-signaled?
to
see if it is terminated voluntarily or by a signal, and
use sys-wait-exit-status
or sys-wait-termsig
to extract the exit code or the terminating signal
(see Process management).
process-send-signal process signal {gauche.process}
Sends a signal signal to the subprocess process. signal must be an exact integer signal number. See Signal, for predefined variables of signals.
process-kill process {gauche.process}
process-stop process {gauche.process}
process-continue process {gauche.process}
Sends SIGKILL, SIGSTOP and SIGCONT to process, respectively.
process-shutdown process :key … {gauche.process}
Tries to terminate process by gradually escalating means.

The ask argument is, if not #f, a procedure taking one argument, the retry count. The ask procedure is responsible for interacting with process using a proper protocol (e.g. sending an exit command via a communication channel). After ask is executed (its return value is ignored), the process’s status is monitored, and process-shutdown returns #t as soon as the process exits. If the process doesn’t exit, the procedure repeats calling ask up to ask-retry times, with an ask-interval nanoseconds delay in between, incrementing the retry count argument. If the ask argument is #f, this step is skipped. The default value of ask is #f, ask-interval is 50e6 ns (50ms), and ask-retry is 1.
If the first step fails, this procedure starts sending signals. The process’s status is checked each time after a signal is sent. As soon as the process exits, process-shutdown returns #t. The sequence of signals is specified by the signals argument, and the interval of sending signals is specified by the signal-interval argument. The default value of signals is (list SIGTERM SIGTERM SIGKILL), and signal-interval is 50e6 ns (50ms).

If all the measures fail, #f is returned.

(Hint: If you want to keep sending signals until the process really exits, you can pass a circular list as signals.)
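For illustration, a minimal sketch (the command and its quit protocol are purely illustrative, and the keyword is assumed to be named :ask, after the argument name above): ask an interactive child to quit over its stdin before falling back to signals.

(let1 p (run-process '(some-interactive-command) :input :pipe)
  (process-shutdown p
                    :ask (^[retry-count]
                           (display "quit\n" (process-input p))
                           (flush (process-input p)))))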
open-input-process-port command :key input error encoding conversion-buffer-size {gauche.process}
Runs command asynchronously in a subprocess. Returns two values: an input port which is connected to the stdout of the running subprocess, and a process object.

Command can be a string, a list of a command name and arguments, or a list of lists of command names and arguments.

If it is a string, it is passed to /bin/sh. You can use shell metacharacters in this form, such as environment variable interpolation, globbing, and redirections. If you create the command line by concatenating strings, it’s your responsibility to properly escape special characters if you don’t want the shell to interpret them. The shell-escape-string function in text.sh might help (see text.sh - Shell text utilities).

If command is a list (but not a list of lists), each element is converted to a string by x->string and then passed directly to sys-exec (the car of the list is used as both the command path and the first element of argv, i.e. argv[0]). Use this form if you want to prevent the shell from interfering; i.e. you don’t need to escape special characters.
The subprocess’s stdin is redirected from /dev/null, and its stderr shares the calling process’s stderr by default. You can change these by giving file pathnames to the input and error keyword arguments, respectively.

If command is a list of lists, it creates a command pipeline, as in run-pipeline (see Running process pipeline). Each inner list should consist of a command path, followed by command-line arguments. x->string is applied to each element before it is passed to sys-exec. The stdout of the last command is available from the returned input port, and the stdin of the first command is provided by the input keyword argument, or /dev/null by default. The stderr of all the commands goes to the error keyword argument if given, or is shared with the calling process’s stderr.
You can also give the encoding keyword argument
to specify character encoding of the process output. If it is
other than utf-8
,
open-input-process-port
inserts a character encoding
conversion port.
If encoding is given, the conversion-buffer-size keyword
argument can control the conversion buffer size.
See gauche.charconv
- Character Code Conversion, for
the details of character encoding conversions.
(receive (port process) (open-input-process-port "ls -l Makefile")
  (begin0 (read-line port)
          (process-wait process)))
 ⇒ "-rw-r--r-- 1 shiro users 1013 Jun 22 21:09 Makefile"

(receive (port process) (open-input-process-port '(ls -l "Makefile"))
  (begin0 (read-line port)
          (process-wait process)))
 ⇒ "-rw-r--r-- 1 shiro users 1013 Jun 22 21:09 Makefile"

(open-input-process-port "command 2>&1")
 ⇒ ;the port reads both stdout and stderr

(open-input-process-port "command 2>&1 1>/dev/null")
 ⇒ ;the port reads stderr
The exit status of the subprocess is not automatically collected. It is the caller’s responsibility to issue process-wait, or the subprocess remains in a zombie state. If it bothers you, you can use one of the following functions.
call-with-input-process command proc :key on-abnormal-exit … {gauche.process}
Runs command in a subprocess and pipes its stdout to an input port, then calls proc with the port as an argument. When proc returns, it collects the exit status of the subprocess, then returns the result proc returned. The cleanup is done even if proc raises an error.
The keyword argument on-abnormal-exit specifies what happens when the child process exits with a non-zero status code. It can be either :error (the default), :ignore, #f, or a procedure that takes one argument. If it is :error, a <process-abnormal-exit> error condition is thrown on non-zero exit status; the process slot of the condition holds the process object. If it is :ignore, nothing is done for non-zero exit status. If it is #f, the result of proc is discarded and #f is returned. If it is a procedure, it is called with a process object; when the procedure returns, call-with-input-process returns normally.

Note that on-abnormal-exit here differs from that of do-process (see Running subprocess). We want finer control over the situation than merely checking whether the subprocess succeeded or failed.
The semantics of command and other keyword arguments are the same
as open-input-process-port
above.
(call-with-input-process "ls -l *" (lambda (p) (read-line p)))
with-input-from-process command thunk :key … {gauche.process}
Runs command in a subprocess, and calls thunk
with its current input port connected to the command’s stdout.
The command is terminated and its exit status is collected,
after thunk returns or raises an error.
The semantics of command and keyword arguments are the same
as call-with-input-process
above.
(with-input-from-process "ls -l *" read-line)
open-output-process-port command :key output error encoding conversion-buffer-size {gauche.process}
Runs command in a subprocess asynchronously. Returns two values: an output port which is connected to the stdin of the subprocess, and the process object.
The semantics of command is the same as
open-input-process-port
. The semantics of
encoding and conversion-buffer-size are also the same.
The subprocess’s stdout is redirected to /dev/null
by default,
and its stderr shares the calling process’s stderr.
You can change these by giving file pathnames to output and
error keyword arguments, respectively.
The exit status of the subprocess is not automatically collected. The caller should call process-wait on the subprocess at an appropriate time.
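For illustration, a minimal sketch (the file name out.txt is illustrative): feed data to a command’s stdin through the returned port, redirecting the command’s stdout to a file.

(receive (port process)
    (open-output-process-port '(cat) :output "out.txt")
  (display "hello\n" port)
  (close-port port)            ; EOF lets cat finish
  (process-wait process))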
call-with-output-process command proc :key on-abnormal-exit … {gauche.process}
Runs command
in a subprocess, and calls proc
with an output port which is connected to the stdin of the command.
The exit status of the command is collected after either proc
returns or raises an error.
The semantics of keyword arguments are the same as
open-output-process-port
, except on-abnormal-exit,
which is the same as described in call-with-input-process
.
(call-with-output-process "/usr/sbin/sendmail" (lambda (out) (display mail-body out)))
with-output-to-process command thunk :key … {gauche.process}
Same as call-with-output-process
, except that the
output port which is connected to the stdin of the command
is set to the current output port while executing thunk.
call-with-process-io command proc :key error on-abnormal-exit … {gauche.process}
Runs command in a subprocess, and calls proc with two arguments; the first argument is an input port connected to the command’s stdout, and the second is an output port connected to the command’s stdin. The error output from the command is shared with the calling process’s, unless an alternative pathname is given to the error keyword argument.

The exit status of the command is collected when proc returns or raises an error.
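For illustration, a minimal sketch that talks to a filter command through both its stdin and stdout; closing the output port sends EOF so the filter can flush and exit.

(call-with-process-io '(tr "a-z" "A-Z")
  (lambda (in out)               ; in: command's stdout, out: command's stdin
    (display "hello\n" out)
    (close-port out)
    (read-line in)))
 ⇒ "HELLO"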
process-output->string command :key … {gauche.process}
process-output->string-list command :key … {gauche.process}
Runs command, collects its output (to stdout) and returns it.

process-output->string concatenates all the output from command into one string, replacing any sequence of whitespace characters with a single space. The action is similar to “command substitution” in shell scripts. process-output->string-list collects the output from command line by line and returns a list of the lines. Newline characters are stripped.

Internally, command is run by call-with-input-process, to which the keyword arguments are passed. If you want to use this just like shell’s command substitution, specifying :on-abnormal-exit :ignore is usually what you want. However, if you try another command when one fails, :on-abnormal-exit #f may be more suitable, for :ignore returns an empty string when the command fails without producing output.

(Tip: To receive the stderr output of the child process in the result as well, pass :merge to :error. See run-process above for the details.)
(process-output->string '(uname -smp))
 ⇒ "Linux i686 unknown"

(process-output->string '(ls))
 ⇒ "a.out foo.c foo.c~ foo.o"

(process-output->string-list '(ls))
 ⇒ ("a.out" "foo.c" "foo.c~" "foo.o")

;; Suppress error message when the-file doesn't exist:
(process-output->string-list '(cat "the-file") :error :null)

;; If the-file doesn't exist, we want to try another-file:
(any (^[file] (process-output->string-list `(cat ,file)
                                            :error :null
                                            :on-abnormal-exit #f))
     '("the-file" "another-file"))
A connection abstraction to communicate with an external process. Inherits <connection>. See gauche.connection - Connection framework, for the details of the connection interface.

This is useful to give an external process to code that expects a connection. For example, instead of a direct network connection, you can insert a filter process between a remote server and your client code.
Runs an external process and returns a connection that is connected to the standard I/O of the process.

You can pass a list of a command and its arguments, or a <process> object, to the process-or-spec argument. If it is a <process> object, its stdin and stdout must be connected to pipes. If it is a list, it is passed to run-process to run a new process.
Shutting down both channels of the connection terminates the process. Most processes that read from stdin exit after they read EOF from input, so we just poll the process exit status for a short period of time. If the process doesn’t exit, we send signals (first SIGTERM, then SIGKILL) to ensure the termination of the process.

Merely closing the connection doesn’t terminate the process, so that a forked child process can keep talking to the process.
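For illustration, a small sketch, assuming the constructor described above is named make-process-connection and using the generic connection interface from gauche.connection: wrap a filter process as a connection.

(use gauche.process)
(use gauche.connection)

(let1 c (make-process-connection '(tr "a-z" "A-Z"))
  (display "hello\n" (connection-output-port c))
  (connection-shutdown c 'write)            ; send EOF to the filter
  (begin0 (read-line (connection-input-port c))
          (connection-shutdown c 'read)))   ; both channels now shut down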