amisc.component

A `Component` is an `amisc` wrapper around a single discipline model. It manages surrogate construction and, optionally, a hierarchy of modeling fidelities that may be available. Concrete component classes all inherit from the base `ComponentSurrogate` class provided here. Components manage an array of `BaseInterpolator` objects to form a multifidelity hierarchy.
Includes:

- `ComponentSurrogate`: the base class that is fundamental to the adaptive multi-index stochastic collocation strategy
- `SparseGridSurrogate`: an AMISC component that manages a hierarchy of `LagrangeInterpolator` objects
- `AnalyticalSurrogate`: a light wrapper around a single discipline model that does not require surrogate approximation
ComponentSurrogate(x_vars, model, multi_index=None, truth_alpha=(), max_alpha=(), max_beta=(), log_file=None, executor=None, model_args=(), model_kwargs=None)
Bases: ABC
The base multi-index stochastic collocation (MISC) surrogate class for a single discipline component model.
Multi-indices

A multi-index is a tuple of natural numbers, each specifying a level of fidelity. You will frequently see two multi-indices: `alpha` and `beta`. The `alpha` (or \(\alpha\)) indices specify physical model fidelity and get passed to the model as an additional argument (e.g. discretization level, time step size, etc.). The `beta` (or \(\beta\)) indices specify surrogate refinement level, typically an indication of the amount of training data used. Each fidelity index in \(\alpha\) and \(\beta\) increases in refinement from \(0\) up to `max_alpha` and `max_beta`. From the surrogate's perspective, the concatenation of \((\alpha, \beta)\) fully specifies a single fidelity "level". The `ComponentSurrogate` forms an approximation of the model by summing over many of these concatenated sets of \((\alpha, \beta)\). These lists are stored in a data structure of `list[tuple[tuple, tuple], ...]`. When \(\alpha\) or \(\beta\) are used as keys in a `dict`, they are cast from a Python `tuple` to a `str`.
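For illustration, a minimal sketch of this storage convention (the index values below are hypothetical, not part of the `amisc` API):

```python
# Hypothetical (alpha, beta) values illustrating the storage convention above.
alpha = (1, 0)   # model fidelity, e.g. mesh level 1, time-step level 0
beta = (2, 2)    # surrogate fidelity, e.g. training grid refinement levels

# Active multi-indices are stored as list[tuple[tuple, tuple]].
index_set = [((0, 0), (0, 0)), (alpha, beta)]

# When used as dict keys, alpha/beta tuples are cast to str.
costs = {str(alpha): 12.5}   # {'(1, 0)': 12.5}
```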
ATTRIBUTE | DESCRIPTION |
---|---|
`index_set` | the current active set of multi-indices in the MISC approximation |
`candidate_set` | all neighboring multi-indices that are candidates for inclusion in `index_set` |
`x_vars` | list of variables that define the input domain |
`ydim` | the number of outputs returned by the model |
`_model` | stores a reference to the model or function to be approximated, callable as `ret = model(x, *args, **kwargs)` |
`_model_args` | additional arguments to supply to the model |
`_model_kwargs` | additional keyword arguments to supply to the model |
`truth_alpha` | the model fidelity indices to treat as the "ground truth" model |
`max_refine` | the maximum level of refinement for each fidelity index in \((\alpha, \beta)\) |
`surrogates` | keeps track of the `BaseInterpolator` objects for each \((\alpha, \beta)\) |
`costs` | keeps track of the total cost associated with adding a single \((\alpha, \beta)\) to the MISC approximation |
`misc_coeff` | the combination technique coefficients for the MISC approximation |
Construct the MISC surrogate and initialize with any multi-indices passed in.
Model specification

The model is a callable function of the form `ret = model(x, *args, **kwargs)`. The return value is a dictionary of the form `ret = {'y': y, 'files': files, 'cost': cost}`. In the return dictionary, you must specify the raw model output `y` as an `np.ndarray` at a minimum. Optionally, you can specify paths to output files, the average model cost (in units of seconds of CPU time), and anything else you want.
Warning
If the model has multiple fidelities, then the function signature must be `model(x, alpha, *args, **kwargs)`; the first argument after `x` will always be the fidelity indices `alpha`. The rest of `model_args` will be passed in after (you do not need to include `alpha` in `model_args`; it is done automatically).
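As a concrete sketch of this calling convention (the quadratic below is an arbitrary stand-in, not a real `amisc` model):

```python
import numpy as np

def model(x, alpha=(0,)):
    """x: (..., xdim) input array; alpha: optional model fidelity indices."""
    # Toy output: a quadratic whose value depends on the fidelity index.
    y = np.sum(x ** 2, axis=-1, keepdims=True) / (alpha[0] + 1)
    return {'y': y, 'cost': 0.1}  # 'y' is required; 'cost' (CPU-s) is optional

ret = model(np.array([[1.0, 2.0]]), alpha=(1,))  # ret['y'] -> [[2.5]]
```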
PARAMETER | DESCRIPTION |
---|---|
`x_vars` | list of variables that define the input domain |
`model` | the function to approximate, callable as `ret = model(x, *args, **kwargs)` |
`multi_index` | \((\alpha, \beta)\) multi-indices to initialize the surrogate with (optional) |
`truth_alpha` | specifies the highest model fidelity indices necessary for a "ground truth" comparison |
`max_alpha` | the maximum model refinement indices to allow |
`max_beta` | the maximum surrogate refinement indices |
`log_file` | specifies a log file (optional) |
`executor` | parallel executor used to add candidate indices in parallel (optional) |
`model_args` | optional args to pass when calling the model |
`model_kwargs` | optional kwargs to pass when calling the model |
Source code in src/amisc/component.py
activate_index(alpha, beta)
Add a multi-index to the active set and all neighbors to the candidate set.
PARAMETER | DESCRIPTION |
---|---|
`alpha` | A multi-index specifying model fidelity |
`beta` | A multi-index specifying surrogate fidelity |
add_surrogate(alpha, beta)
Build a `BaseInterpolator` object for a given \((\alpha, \beta)\).

PARAMETER | DESCRIPTION |
---|---|
`alpha` | A multi-index specifying model fidelity |
`beta` | A multi-index specifying surrogate fidelity |
init_coarse()
Initialize the coarsest interpolation and add it to the active index set.
iterate_candidates()
Iterate candidate indices one by one into the active index set.

YIELDS | DESCRIPTION |
---|---|
`alpha, beta` | the multi-indices of the current candidate that has been moved to the active set |
predict(x, use_model=None, model_dir=None, training=False, index_set=None, ppool=None)
Evaluate the MISC approximation at new points `x`.

Note

By default this will predict with the MISC surrogate approximation. However, for convenience you can also specify `use_model` to call the underlying function instead.
PARAMETER | DESCRIPTION |
---|---|
`x` | the points at which to evaluate the surrogate |
`use_model` | 'best'=high-fidelity, 'worst'=low-fidelity, tuple=a specific \(\alpha\) |
`model_dir` | directory to save output files, if calling the underlying model |
`training` | if `True`, evaluate using only the active index set |
`index_set` | a list of concatenated \((\alpha, \beta)\) to override the active index set |
`ppool` | a `joblib` `Parallel` pool (optional) |

RETURNS | DESCRIPTION |
---|---|
`ndarray` | the MISC approximation evaluated at the points `x` |
grad(x, training=False, index_set=None)
Evaluate the derivative/Jacobian of the MISC approximation at new points `x`.

PARAMETER | DESCRIPTION |
---|---|
`x` | the points at which to evaluate the Jacobian |
`training` | if `True`, evaluate using only the active index set |
`index_set` | a list of concatenated \((\alpha, \beta)\) to override the active index set |

RETURNS | DESCRIPTION |
---|---|
`ndarray` | the Jacobian of the MISC approximation at the points `x` |
hessian(x, training=False, index_set=None)
Evaluate the Hessian of the MISC approximation at new points `x`.

PARAMETER | DESCRIPTION |
---|---|
`x` | the points at which to evaluate the Hessian |
`training` | if `True`, evaluate using only the active index set |
`index_set` | a list of concatenated \((\alpha, \beta)\) to override the active index set |

RETURNS | DESCRIPTION |
---|---|
`ndarray` | the Hessian of the MISC approximation at the points `x` |
update_misc_coeffs(index_set=None)
Update the combination technique coeffs for MISC using the given index set.
PARAMETER | DESCRIPTION |
---|---|
`index_set` | the index set to consider when computing the MISC coefficients; defaults to the active set |

RETURNS | DESCRIPTION |
---|---|
`MiscTree` | the MISC coefficients for the given index set (\(\alpha\) -> \(\beta\) -> coeff) |
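For reference, the textbook combination-technique expression for these coefficients over a downward-closed index set \(\mathcal{I}\) is (the implementation here may differ in detail):

\[
c_{\alpha,\beta} = \sum_{\mathbf{j} \in \{0,1\}^{n}} (-1)^{\lVert \mathbf{j} \rVert_1}\, \mathbb{1}\!\left[(\alpha, \beta) + \mathbf{j} \in \mathcal{I}\right], \qquad n = |\alpha| + |\beta|,
\]

where the sum contributes \(\pm 1\) for each forward neighbor of \((\alpha, \beta)\) that is present in \(\mathcal{I}\).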
get_sub_surrogate(alpha, beta)
Get the specific sub-surrogate corresponding to the \((\alpha, \beta)\) fidelity.
PARAMETER | DESCRIPTION |
---|---|
`alpha` | A multi-index specifying model fidelity |
`beta` | A multi-index specifying surrogate fidelity |

RETURNS | DESCRIPTION |
---|---|
`BaseInterpolator` | the corresponding sub-surrogate |
get_cost(alpha, beta)
Return the total cost (wall time s) required to add \((\alpha, \beta)\) to the MISC approximation.
PARAMETER | DESCRIPTION |
---|---|
`alpha` | A multi-index specifying model fidelity |
`beta` | A multi-index specifying surrogate fidelity |
update_input_bds(idx, bds)
Update the bounds of the input variable at the given index.
PARAMETER | DESCRIPTION |
---|---|
`idx` | the index of the input variable to update |
`bds` | the new bounds |
save_enabled()
Return whether this model wants to save outputs to file.
Note
You can specify that a model wants to save outputs to file by providing an `'output_dir'` kwarg.
is_one_level_refinement(beta_old, beta_new)
staticmethod
Check if a new `beta` multi-index is a one-level refinement from a previous `beta`.

Example

Refining from `(0, 1, 2)` to the new multi-index `(1, 1, 2)` is a one-level refinement. But refining to either `(2, 1, 2)` or `(1, 2, 2)` is not, since more than one refinement occurs at the same time.
PARAMETER | DESCRIPTION |
---|---|
`beta_old` | the starting multi-index |
`beta_new` | the new refined multi-index |

RETURNS | DESCRIPTION |
---|---|
`bool` | whether `beta_new` is a one-level refinement of `beta_old` |
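The check above can be sketched in a few lines (an illustrative re-implementation, not the `amisc` source):

```python
def is_one_level_refinement(beta_old, beta_new):
    """True if exactly one index increased, and only by one level."""
    diffs = [new - old for old, new in zip(beta_old, beta_new)]
    return sum(d != 0 for d in diffs) == 1 and all(d in (0, 1) for d in diffs)

assert is_one_level_refinement((0, 1, 2), (1, 1, 2))      # one +1 step
assert not is_one_level_refinement((0, 1, 2), (2, 1, 2))  # jumps two levels
assert not is_one_level_refinement((0, 1, 2), (1, 2, 2))  # two indices change
```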
is_downward_closed(indices)
staticmethod
Return whether a list of \((\alpha, \beta)\) multi-indices is downward-closed.

MISC approximations require a downward-closed set in order to use the combination-technique formula for the coefficients (as implemented here).

Example

The list `[( (0,), (0,) ), ( (1,), (0,) ), ( (1,), (1,) )]` is downward-closed. You can visualize this as building a stack of cubes: in order to place a cube, all adjacent cubes must be present (does the logo make sense now?).

PARAMETER | DESCRIPTION |
---|---|
`indices` | list of \((\alpha, \beta)\) multi-indices |

RETURNS | DESCRIPTION |
---|---|
`bool` | whether the set of indices is downward-closed |
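One way to formalize this property, consistent with the example above, is the following sketch (not the `amisc` implementation; the library's exact convention may differ):

```python
def _closed(tuples):
    """A set of equal-length index tuples is downward-closed if decrementing
    any positive entry of any member yields another member."""
    s = set(tuples)
    return all(
        t[:i] + (t[i] - 1,) + t[i + 1:] in s
        for t in s for i in range(len(t)) if t[i] > 0
    )

def is_downward_closed(indices):
    # The set of alpha indices must be downward-closed, and for each alpha,
    # its associated set of beta indices must be downward-closed as well.
    alphas = {a for a, _ in indices}
    betas_for = {a: {b for a2, b in indices if a2 == a} for a in alphas}
    return _closed(alphas) and all(_closed(bs) for bs in betas_for.values())

assert is_downward_closed([((0,), (0,)), ((1,), (0,)), ((1,), (1,))])
assert not is_downward_closed([((0,), (0,)), ((1,), (1,))])  # ((1,), (0,)) missing
```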
build_interpolator(alpha, beta)
abstractmethod
Return a `BaseInterpolator` object and new refinement points for a given \((\alpha, \beta)\) multi-index.

PARAMETER | DESCRIPTION |
---|---|
`alpha` | A multi-index specifying model fidelity |
`beta` | A multi-index specifying surrogate fidelity |

RETURNS | DESCRIPTION |
---|---|
`InterpResults` | the new `BaseInterpolator` object and refinement points |
update_interpolator(x_new_idx, x_new, interp)
abstractmethod
Secondary method to actually compute and save model evaluations within the interpolator.
Note
This distinction from `build_interpolator` was necessary so that the interpolator can be constructed separately from evaluating the model at the new interpolation points. For example, `parallel_add_candidates` uses this distinction to compute the model in parallel on MPI workers.

PARAMETER | DESCRIPTION |
---|---|
`x_new_idx` | list of new grid point indices |
`x_new` | the new interpolation points |
`interp` | the `BaseInterpolator` object to update |

RETURNS | DESCRIPTION |
---|---|
`float` | the cost (in wall time seconds) required to add this \((\alpha, \beta)\) |
parallel_add_candidates(candidates, executor)
abstractmethod
Defines a function to handle adding candidate indices in parallel.
Note
While `build_interpolator` can make changes to `self`, these changes will not be saved in the master task if running in parallel over MPI workers, for example. This method is a workaround so that all required mutable changes to `self` are made in the master task before distributing tasks to parallel workers using this method. You can `pass` if you don't plan to add candidates in parallel.

PARAMETER | DESCRIPTION |
---|---|
`candidates` | list of \((\alpha, \beta)\) multi-indices |
`executor` | the executor used to iterate candidates in parallel |
SparseGridSurrogate(*args, **kwargs)
Bases: ComponentSurrogate
Concrete MISC surrogate class that maintains a sparse grid composed of smaller tensor-product grids.
Note
MISC itself can be thought of as an extension of the well-known sparse grid technique, so this class readily integrates with the MISC implementation in `ComponentSurrogate`. Sparse grids limit the curse of dimensionality up to about `dim = 10-15` for the input space (which would otherwise be infeasible with a normal full tensor-product grid of the same size).
About points in a sparse grid
A sparse grid approximates a full tensor-product grid \((N_1, N_2, ..., N_d)\), where \(N_i\) is the number of grid points along dimension \(i\), for a \(d\)-dimensional space. Each point is uniquely identified in the sparse grid by a list of indices \((j_1, j_2, ..., j_d)\), where \(j_i = 0 ... N_i\). We refer to this unique identifier as a "grid coordinate". In the `HashSG` data structure, we use a `str(tuple(coord))` representation to uniquely identify the coordinate in a hash DS like Python's `dict`.
ATTRIBUTE | DESCRIPTION |
---|---|
`HashSG` | a type alias for the hash storage of the sparse grid data (a tree-like DS using dicts) |
`curr_max_beta` | the current maximum \(\beta\) refinement indices in the sparse grid (for each \(\alpha\)) |
`x_grids` | maps \(\alpha\) indices to a list of 1d grids, one per input dimension |
`xi_map` | the sparse grid interpolation points |
`yi_map` | the function values at all sparse grid points |
`yi_nan_map` | imputed function values to use when the model returns `nan` |
`yi_files` | optional filenames corresponding to the sparse grid points |
predict(x, use_model=None, model_dir=None, training=False, index_set=None, ppool=None)
Need to override `super()` to allow passing in interpolation grids `xi` and `yi`.
grad(x, training=False, index_set=None)
Need to override `super()` to allow passing in interpolation grids `xi` and `yi`.
hessian(x, training=False, index_set=None)
Need to override `super()` to allow passing in interpolation grids `xi` and `yi`.
get_tensor_grid(alpha, beta, update_nan=True)
Construct the `xi/yi` sub tensor-product grids for a given \((\alpha, \beta)\) multi-index.

PARAMETER | DESCRIPTION |
---|---|
`alpha` | model fidelity multi-index |
`beta` | surrogate fidelity multi-index |
`update_nan` | try to fill `nan` values with imputed values |

RETURNS | DESCRIPTION |
---|---|
`tuple[ndarray, ndarray]` | the `xi/yi` tensor-product grids |
get_training_data()
Grab all `x, y` training data stored in the sparse grid for each model fidelity level \(\alpha\).

RETURNS | DESCRIPTION |
---|---|
`tuple[dict[str: ndarray], dict[str: ndarray]]` | the `x, y` training data, keyed by \(\alpha\) indices |
update_yi(alpha, beta, yi_dict)
Helper method to update `yi` values, accounting for possible `nan`s by regression imputation.

PARAMETER | DESCRIPTION |
---|---|
`alpha` | the model fidelity indices |
`beta` | the surrogate fidelity indices |
`yi_dict` | a `dict` of new `yi` values, keyed by grid coordinate |
get_sub_surrogate(alpha, beta, include_grid=False)
Get the specific sub-surrogate corresponding to the \((\alpha, \beta)\) fidelity.
PARAMETER | DESCRIPTION |
---|---|
`alpha` | A multi-index specifying model fidelity |
`beta` | A multi-index specifying surrogate fidelity |
`include_grid` | whether to add the `xi/yi` interpolation points to the returned interpolator |

RETURNS | DESCRIPTION |
---|---|
`BaseInterpolator` | the corresponding sub-surrogate |
build_interpolator(alpha, beta)
Abstract method implementation for constructing the tensor-product grid interpolator.
update_interpolator(x_new_idx, x_new, interp)
Awkward solution, I know, but actually compute and save the model evaluations here.
parallel_add_candidates(candidates, executor)
Work-around to make sure mutable instance variable changes are made before/after splitting tasks over parallel (potentially MPI) workers. You can `pass` if you are not interested in such parallel ideas.

Warning

MPI workers cannot save changes to `self`, so this method should only distribute static tasks to the workers.

PARAMETER | DESCRIPTION |
---|---|
`candidates` | list of \((\alpha, \beta)\) multi-indices |
`executor` | the executor used to iterate candidates in parallel |
AnalyticalSurrogate(x_vars, model, *args, **kwargs)
Bases: ComponentSurrogate
Concrete "surrogate" class that just uses the analytical model (i.e. bypasses surrogate evaluation).
Initializes a stand-in `ComponentSurrogate` with all unnecessary fields set to empty.
Warning
This overwrites anything passed in for `truth_alpha`, `max_alpha`, `max_beta`, or `multi_index`, since these are not used for an analytical model.
predict(x, **kwargs)
Evaluate the analytical model at points `x`, ignoring any extra `**kwargs` passed in.

PARAMETER | DESCRIPTION |
---|---|
`x` | the points at which to evaluate the model |

RETURNS | DESCRIPTION |
---|---|
`ndarray` | the model output at the points `x` |
grad(x, training=False, index_set=None)
Use auto-diff to compute the derivative of an analytical model. The model must be implemented with `numpy`.

Not implemented yet

Hypothetically, auto-diff libraries like `jax` could be used to write a generic gradient function here for analytical models implemented directly in Python/numpy. But there are a lot of quirks that should be worked out first.
hessian(x, training=False, index_set=None)
Use auto-diff to compute the Hessian of an analytical model. The model must be implemented with `numpy`.

Not implemented yet

Hypothetically, auto-diff libraries like `jax` could be used to write a generic Hessian function here for analytical models implemented directly in Python/numpy. But there are a lot of quirks that should be worked out first.