Venture can be invoked in two styles: as a standalone language, and as a Python library.
Interactive console:
$ venture
venture[script] > assume x = normal(0, 1)
venture[script] > observe normal(x, 1) = 2
Source file:
$ cat prog.vnts
assume x = normal(0, 1);
observe normal(x, 1) = 2;
$ venture -f prog.vnts
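These two styles can also be combined: the -i flag (documented in the help output reproduced below) runs a script file and then enters the interactive console, for example:
$ venture -f prog.vnts -i
venture[script] >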
Python library:
from venture.shortcuts import *
v = make_ripl()
v.assume("x", "normal(0, 1)")
v.observe("normal(x, 1)", 2)
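From here the same ripl object can drive inference and query the model programmatically. The following is a minimal sketch, assuming the mh inference operator and the sample RIPL method are available in the version at hand:
v.infer("mh(default, one, 100)")  # run 100 Metropolis-Hastings transitions (assumed operator)
v.sample("x")                     # report the current value of x in the trace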
Python library (batch invocation):
import venture.shortcuts as s
v = s.Lite().make_ripl()
v.execute_program("""
assume x = normal(0, 1);
observe normal(x, 1) = 2;
""")
The two styles are equally expressive, because the standalone language can be extended with plugins and callbacks written in Python, which can then manipulate the Venture system programmatically.
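A minimal sketch of such a plugin, loaded with the -L flag documented below (the model fragment it preloads is purely illustrative):
# plugin.py -- load with: venture -L plugin.py
def __venture_start__(ripl):
    # Venture calls this hook with the ripl at startup (see the -L help text below).
    ripl.execute_program("""
    assume mu = normal(0, 1);
    """)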
The Venture executable is self-documenting with respect to its invocation patterns. We reproduce here the help messages for representative invocation modes.
$ venture -h
usage: venture [-h] [--version] [-f FILE] [-e EVAL] [-i] [-a] [-L LIBRARY] [--prelude]
A general-purpose probabilistic computing platform.
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-f FILE, --file FILE execute the given file
-e EVAL, --eval EVAL execute the given program
-i, --interactive enter interactive interpreter after completing script file
-a, --abstract-syntax
interpret abstract rather than concrete syntax
-L LIBRARY, --library LIBRARY
load a Python file in the local directory. If the file defines
a __venture_start__ function, call it with the ripl to
initialize the library.
--prelude load model prelude on startup (default: off)
optional modes:
lite (default) VentureScript, tracing models with the Lite backend
puma VentureScript, tracing models with the Puma backend
server Start a VentureScript command server
remote Connect to a running VentureScript command server
Run venture {mode} -h for mode-specific options.
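Selecting a backend explicitly follows the same pattern as the default invocation; for example (assuming the puma mode accepts the same file flag as lite, per its own -h):
$ venture puma -f prog.vnts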
$ venture lite -h
usage: venture lite [-h] [-f FILE] [-e EVAL] [-i] [-a] [-L LIBRARY] [--prelude]
VentureScript, tracing models with the Lite backend
optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE execute the given file
-e EVAL, --eval EVAL execute the given program
-i, --interactive enter interactive interpreter after completing script file
-a, --abstract-syntax
interpret abstract rather than concrete syntax
-L LIBRARY, --library LIBRARY
load a Python file in the local directory. If the file defines a __venture_start__
function, call it with the ripl to initialize the library.
--prelude load model prelude on startup (default: off)
$ venture server -h
usage: venture server [-h] [-p PORT] backend
Start a VentureScript server
positional arguments:
backend the backend to trace models in
optional arguments:
-h, --help show this help message and exit
-p PORT, --port PORT the port to listen on (default 8082)
$ venture remote -h
usage: venture remote [-h] [--host HOST] [-p PORT]
Connect to a running VentureScript server
optional arguments:
-h, --help show this help message and exit
--host HOST the host to connect to (default 127.0.0.1)
-p PORT, --port PORT the port to connect on (default 8082)
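The server and remote modes are designed to pair: one machine traces models while another drives it over the network. A sketch of such a pairing, with a hypothetical host address:
On the host machine:
$ venture server -p 8082 lite
On a client:
$ venture remote --host 192.0.2.10 -p 8082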