Utilities
- class linear_operator.utils.StochasticLQ(max_iter=15, num_random_probes=10)[source]
Implements an approximate log determinant calculation for symmetric positive definite matrices using stochastic Lanczos quadrature. For efficient calculation of derivatives, we additionally compute the trace of the inverse using the same probe vectors the log determinant was computed with. For more details, see Dong et al. (2017).
- to_dense(matrix_shape, eigenvalues, eigenvectors, funcs)[source]
Computes \(\mathrm{tr}(f(\mathbf A))\) for an arbitrary list of functions, where \(f(\mathbf A)\) is equivalent to applying the function elementwise to the eigenvalues of \(\mathbf A\), i.e., if \(\mathbf A = \mathbf V \boldsymbol\Lambda \mathbf V^\top\), then \(f(\mathbf A) = \mathbf V f(\boldsymbol\Lambda) \mathbf V^\top\), where \(f(\boldsymbol\Lambda)\) is applied elementwise. Note that calling this function with a list of functions to apply is significantly more efficient than calling it multiple times with one function: each additional function after the first requires negligible additional computation.
- Args:
matrix_shape (torch.Size) - the size of the underlying matrix (not including batch dimensions)
eigenvalues (Tensor n_probes x …batch_shape x k) - batches of eigenvalues from the Lanczos tridiagonal matrices
eigenvectors (Tensor n_probes x …batch_shape x k x k) - batches of eigenvectors from the Lanczos tridiagonal matrices
- funcs (list of closures) - A list of functions [f_1,…,f_k]. tr(f_i(A)) is computed for each function.
Each closure should expect a torch vector of eigenvalues as input and apply the function elementwise. For example, to compute logdet(A) = tr(log(A)), [lambda x: x.log()] would be a reasonable value of funcs.
- Returns:
- results (list of scalars) - The trace of each supplied function applied to the matrix, i.e., [tr(f_1(A)), tr(f_2(A)), …, tr(f_k(A))].
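Example: a minimal sketch of the identity that to_dense exploits, computed here with a dense eigendecomposition rather than stochastic Lanczos quadrature (the matrix A and the tolerances are illustrative assumptions, not part of the library):

```python
import torch

torch.manual_seed(0)

# Illustrative SPD matrix: A = B Bᵀ + n·I is symmetric positive definite.
n = 50
B = torch.randn(n, n)
A = B @ B.mT + n * torch.eye(n)

# tr(f(A)) is the sum of f applied elementwise to the eigenvalues of A.
evals = torch.linalg.eigvalsh(A)
funcs = [lambda x: x.log(), lambda x: x.reciprocal()]
traces = [f(evals).sum() for f in funcs]  # [logdet(A), tr(A^{-1})]

# Check against direct dense computations.
assert torch.isclose(traces[0], torch.logdet(A), rtol=1e-3)
assert torch.isclose(traces[1], torch.linalg.inv(A).trace(), rtol=1e-3)
```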
- linear_operator.utils.cached(method=None, name=None, ignore_args=False)[source]
A decorator for specifying the name of a cache, so that the cached value can be accessed or modified elsewhere.
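Example: a minimal sketch of the decorator on a hypothetical class (the class and method names are illustrative, not part of the library):

```python
from linear_operator.utils import cached

class Demo:
    @cached(name="expensive_result")
    def expensive(self):
        print("computed once")
        return 42

d = Demo()
d.expensive()  # computes and stores the result under the name "expensive_result"
d.expensive()  # served from the cache; the method body does not run again
```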
- linear_operator.utils.contour_integral_quad(linear_op, rhs, inverse=False, weights=None, shifts=None, max_lanczos_iter=20, num_contour_quadrature=None, shift_offset=0)[source]
Performs \(\mathbf K^{1/2} \mathbf b\) or \(\mathbf K^{-1/2} \mathbf b\) using contour integral quadrature.
- Parameters
linear_op (linear_operator.operators.LinearOperator) – LinearOperator representing \(\mathbf K\)
rhs (torch.Tensor) – Right hand side tensor \(\mathbf b\)
inverse (bool) – (default False) whether to compute \(\mathbf K^{1/2} \mathbf b\) (if False) or \(\mathbf K^{-1/2} \mathbf b\) (if True)
max_lanczos_iter (int) – (default 20) Number of Lanczos iterations to run (to estimate eigenvalues)
num_contour_quadrature (int) – How many quadrature samples to use for approximation. If set to None, uses the value stored in linear_operator.settings.
- Returns
Approximation to \(\mathbf K^{1/2} \mathbf b\) or \(\mathbf K^{-1/2} \mathbf b\).
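For reference, the quantity being approximated can be written via a dense eigendecomposition. The sketch below computes both targets exactly; it is the reference computation, not the quadrature routine itself, and K and b are illustrative:

```python
import torch

n = 100
B = torch.randn(n, n)
K = B @ B.mT + n * torch.eye(n)  # symmetric positive definite K
b = torch.randn(n)

# K = V diag(w) Vᵀ, so K^{±1/2} b = V diag(w^{±1/2}) Vᵀ b.
w, V = torch.linalg.eigh(K)
k_sqrt_b = V @ (w.sqrt() * (V.mT @ b))       # target of inverse=False
k_inv_sqrt_b = V @ (w.rsqrt() * (V.mT @ b))  # target of inverse=True
```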
- linear_operator.utils.linear_cg(matmul_closure, rhs, n_tridiag=0, tolerance=None, eps=1e-10, stop_updating_after=1e-10, max_iter=None, max_tridiag_iter=None, initial_guess=None, preconditioner=None)[source]
Implements the linear conjugate gradients method for (approximately) solving systems of the form
lhs @ result = rhs
for symmetric positive definite matrices lhs.
- Args:
matmul_closure - a function which performs a left matrix multiplication with the left-hand side lhs
rhs - the right-hand side of the equation
n_tridiag - if greater than 0, also return a tridiagonalization of the first n_tridiag columns of rhs
tolerance - stop the solve when the (average) norm of the residual(s) is less than this
eps - noise to add to prevent division by zero
stop_updating_after - will stop updating a vector after this residual norm is reached
max_iter - the maximum number of CG iterations
max_tridiag_iter - the maximum size of the tridiagonalization matrix
initial_guess - an initial guess at the solution result
preconditioner - a function which left-preconditions a supplied vector
- Returns:
result - a solution to the system (if n_tridiag is 0)
result, tridiags - a solution to the system and the corresponding tridiagonal matrices (if n_tridiag > 0)
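Example: a minimal sketch of a plain solve, with no tridiagonalization and no preconditioner (the SPD matrix A and the tolerances are illustrative):

```python
import torch
from linear_operator.utils import linear_cg

torch.manual_seed(0)
n = 100
B = torch.randn(n, n)
A = B @ B.mT + n * torch.eye(n)  # symmetric positive definite
rhs = torch.randn(n, 1)

# The closure only needs to compute A @ v; A itself is never inspected.
result = linear_cg(lambda v: A @ v, rhs, tolerance=1e-6)
assert torch.allclose(A @ result, rhs, atol=1e-3)
```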
- linear_operator.utils.minres(matmul_closure, rhs, eps=1e-25, shifts=None, value=None, max_iter=None, preconditioner=None)[source]
Perform MINRES to find solutions to \((\mathbf K + \alpha \sigma \mathbf I) \mathbf x = \mathbf b\). Will find solutions for multiple shifts \(\sigma\) at the same time.
- Parameters
matmul_closure (callable) – Function to perform matmul with.
rhs (torch.Tensor) – The vector \(\mathbf b\) to solve against.
shifts (torch.Tensor) – (default None) The shift \(\sigma\) values. If set to None, then \(\sigma=0\).
value (float) – (default None) The multiplicative constant \(\alpha\). If set to None, then \(\alpha=0\).
max_iter (int) – (default None) The maximum number of MINRES iterations. If set to None, then uses the constant stored in linear_operator.settings.max_cg_iterations.
- Return type
torch.Tensor
- Returns
The solves \(\mathbf x\). The shape will correspond to the size of rhs and shifts.
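Example: a minimal sketch solving against several shifts at once (K, b, and the shift values are illustrative; \(\alpha\) is passed explicitly as value=1.0):

```python
import torch
from linear_operator.utils import minres

n = 80
B = torch.randn(n, n)
K = B @ B.mT + n * torch.eye(n)  # symmetric positive definite
b = torch.randn(n, 1)

# Solve (K + 1.0 * sigma * I) x = b for sigma in {0, 0.1, 1.0} simultaneously.
shifts = torch.tensor([0.0, 0.1, 1.0])
solves = minres(lambda v: K @ v, b, shifts=shifts, value=1.0)
# solves[i] approximately solves (K + shifts[i] * I) x = b.
```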
- linear_operator.utils.stable_pinverse(A)[source]
Compute a pseudoinverse of a matrix. Employs a stabilized QR decomposition.
- Parameters
A (torch.Tensor) – the matrix to pseudo-invert
- Return type
torch.Tensor
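Example: a minimal sketch on a tall matrix, which is full column rank almost surely, so the pseudoinverse acts as a left inverse (shapes and tolerance are illustrative):

```python
import torch
from linear_operator.utils import stable_pinverse

A = torch.randn(6, 4)        # tall matrix, full column rank almost surely
A_pinv = stable_pinverse(A)  # shape 4 x 6
assert torch.allclose(A_pinv @ A, torch.eye(4), atol=1e-4)
```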
- linear_operator.utils.stable_qr(mat)[source]
Performs a QR decomposition on the batched matrix mat. We need this custom function (rather than PyTorch's built-in QR) because of:
- slow batched QR in PyTorch (pytorch/pytorch#22573)
- possible singularity in R
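Example: a minimal sketch on a batch of tall matrices (the shapes are illustrative, and the Q, R unpacking assumes the usual two-tensor QR return):

```python
import torch
from linear_operator.utils import stable_qr

mats = torch.randn(3, 5, 4)  # batch of 3 tall 5 x 4 matrices
Q, R = stable_qr(mats)       # Q: 3 x 5 x 4, R: 3 x 4 x 4
assert torch.allclose(Q @ R, mats, atol=1e-4)
```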
- linear_operator.utils.lanczos.lanczos_tridiag_to_diag(t_mat)[source]
Given a num_init_vecs x num_batch x k x k tridiagonal matrix t_mat, returns a num_init_vecs x num_batch x k set of eigenvalues and a num_init_vecs x num_batch x k x k set of eigenvectors.
TODO: perform the eigenvalue computations in batch mode.
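Example: a minimal sketch on hand-built symmetric tridiagonal matrices (num_init_vecs=2, num_batch=3, k=5 are illustrative, and the eigenvalues-then-eigenvectors return order follows the description above):

```python
import torch
from linear_operator.utils.lanczos import lanczos_tridiag_to_diag

diag = torch.rand(2, 3, 5) + 1.0  # main diagonals
off = torch.rand(2, 3, 4)         # off-diagonals
t_mat = (
    torch.diag_embed(diag)
    + torch.diag_embed(off, offset=1)
    + torch.diag_embed(off, offset=-1)
)  # 2 x 3 x 5 x 5 symmetric tridiagonal matrices

evals, evecs = lanczos_tridiag_to_diag(t_mat)
# evals: 2 x 3 x 5 eigenvalues; evecs: 2 x 3 x 5 x 5 eigenvectors
```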
- linear_operator.utils.sparse.make_sparse_from_indices_and_values(interp_indices, interp_values, num_rows)[source]
This produces a sparse tensor with a fixed number of non-zero entries in each column.
- Args:
- interp_indices - Tensor (batch_size) x num_cols x n_nonzero_entries
A tensor containing the indices of the nonzero entries for each column
- interp_values - Tensor (batch_size) x num_cols x n_nonzero_entries
The corresponding values
num_rows - the number of rows in the result matrix
- Returns:
SparseTensor - (batch_size) x num_cols x num_rows
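Example: a minimal sketch without a batch dimension (the index and value tensors are illustrative, and non-batched input is assumed to be accepted since the batch dimension is shown as optional above); each of the 4 columns has 2 nonzero entries drawn from 6 rows:

```python
import torch
from linear_operator.utils.sparse import make_sparse_from_indices_and_values

# num_cols = 4 columns, n_nonzero_entries = 2 per column, num_rows = 6
interp_indices = torch.tensor([[0, 1], [1, 2], [3, 4], [4, 5]])
interp_values = torch.tensor([[0.5, 0.5], [0.3, 0.7], [1.0, 0.0], [0.2, 0.8]])
sparse = make_sparse_from_indices_and_values(interp_indices, interp_values, num_rows=6)
```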