# Functions

LinearOperator objects are designed to work seamlessly with the torch API. For example:

>>> import torch
>>> import linear_operator
>>> linear_op = linear_operator.operators.ToeplitzLinearOperator(
>>>    torch.tensor([1., 2., 3., 4.])
>>> )
>>> # Represents a Toeplitz matrix:
>>> # [[1., 2., 3., 4.],
>>> #  [2., 1., 2., 3.],
>>> #  [3., 2., 1., 2.],
>>> #  [4., 3., 2., 1.]]
>>> torch.matmul(linear_op, torch.tensor([0.5, 0.1, 0.2, 0.4]))
>>> # Returns: tensor([2.9, 2.7, 2.7, 3.1])

The linear_operator module also includes some functions that are not implemented as part of PyTorch. These functions are designed to work on LinearOperator and Tensor objects alike.

linear_operator.add_diagonal(input, diag)[source]

Adds an element to the diagonal of the matrix $$\mathbf A$$.

Parameters:
• input (Any or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• diag (torch.Tensor) – Diagonal to add

Return type:

Any

Returns:

$$\mathbf A + \text{diag}(\mathbf d)$$, where $$\mathbf A$$ is the linear operator and $$\mathbf d$$ is the diagonal component
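
For example, a minimal sketch with a dense input (per the signature above, a plain torch.Tensor is accepted; the values are illustrative):

>>> A = torch.eye(3)
>>> d = torch.tensor([0.1, 0.2, 0.3])
>>> res = linear_operator.add_diagonal(A, d)
>>> # res represents A + diag(d); its diagonal is now [1.1, 1.2, 1.3]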

linear_operator.add_jitter(input, jitter_val=0.001)[source]

Adds jitter (i.e., a small diagonal component) to the matrix that this LinearOperator represents. This is equivalent to calling add_diagonal() with a scalar tensor.

Parameters:
• input (Any or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• jitter_val (float) – The diagonal component to add

Return type:

Any or torch.Tensor

Returns:

$$\mathbf A + \alpha \mathbf I$$, where $$\mathbf A$$ is the linear operator and $$\alpha$$ is jitter_val.
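
For example, a minimal sketch (the jitter value 1e-4 is arbitrary):

>>> A = torch.eye(3)
>>> jittered = linear_operator.add_jitter(A, 1e-4)
>>> # jittered represents A + 1e-4 * I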

linear_operator.dsmm(sparse_mat, dense_mat)[source]

Performs the (batch) matrix multiplication $$\mathbf{SD}$$ where $$\mathbf S$$ is a sparse matrix and $$\mathbf D$$ is a dense matrix.

Parameters:
• sparse_mat (torch.sparse.HalfTensor or torch.sparse.FloatTensor or torch.sparse.DoubleTensor) – Sparse matrix $$\mathbf S$$ (… x M x N)

• dense_mat (torch.Tensor) – Dense matrix $$\mathbf D$$ (… x N x O)

Return type:

torch.Tensor

Returns:

$$\mathbf S \mathbf D$$ (… x M x O)
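
For example, a minimal sketch using a COO sparse tensor (the shapes are illustrative):

>>> S = torch.tensor([[1., 0.], [0., 2.], [0., 0.]]).to_sparse()  # 3 x 2 sparse S
>>> D = torch.randn(2, 4)                                         # 2 x 4 dense D
>>> out = linear_operator.dsmm(S, D)                              # 3 x 4 result SD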

linear_operator.diagonalization(input, method=None)[source]

Returns a (usually partial) diagonalization of a symmetric positive definite matrix (or batch of matrices) $$\mathbf A$$. Options are either “lanczos” or “symeig”: “lanczos” runs Lanczos, while “symeig” runs LinearOperator.symeig.

Parameters:
• input (Any or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• method (str, optional) – Specify the method to use (“lanczos” or “symeig”). The method will be determined based on size if not specified.

Return type:

(torch.Tensor, torch.Tensor or Any)

Returns:

eigenvalues and eigenvectors representing the diagonalization.
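
For example, a minimal sketch on a small symmetric positive definite matrix (the variable names are ours; per the signature above, a dense tensor input is accepted):

>>> B = torch.randn(4, 4)
>>> A = B @ B.transpose(-1, -2) + torch.eye(4)  # make A symmetric positive definite
>>> evals, evecs = linear_operator.diagonalization(A, method="symeig")
>>> recon = evecs @ torch.diag_embed(evals) @ evecs.transpose(-1, -2)  # recon ≈ A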

linear_operator.inv_quad(input, inv_quad_rhs, reduce_inv_quad=True)[source]

Computes an inverse quadratic form (w.r.t. the matrix $$\mathbf A$$) with several right hand sides, i.e.:

$$$\text{tr}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right),$$$

where $$\mathbf A$$ is a positive definite matrix (or batch of matrices) and $$\mathbf R$$ represents the right hand sides (inv_quad_rhs).

If reduce_inv_quad=False, the function instead computes

$$$\text{diag}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right).$$$
Parameters:
• input (Any or torch.Tensor) – $$\mathbf A$$ - the positive definite matrix (… x N x N)

• inv_quad_rhs (torch.Tensor) – $$\mathbf R$$ - the right hand sides of the inverse quadratic term (… x N x M)

• reduce_inv_quad (bool) – Whether to compute $$\text{tr}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)$$ or $$\text{diag}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)$$.

Return type:

torch.Tensor

Returns:

The inverse quadratic term. If reduce_inv_quad=True, the inverse quadratic term is of shape (…). Otherwise, it is (… x M).
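
For example, a minimal sketch (reusing the 4 x 4 SPD matrix A from the diagonalization example above):

>>> R = torch.randn(4, 2)
>>> tr_term = linear_operator.inv_quad(A, R)                           # tr(R^T A^{-1} R), shape ()
>>> diag_term = linear_operator.inv_quad(A, R, reduce_inv_quad=False)  # diag(R^T A^{-1} R), shape (2,)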

linear_operator.inv_quad_logdet(input, inv_quad_rhs=None, logdet=False, reduce_inv_quad=True)[source]

Calls both inv_quad() and logdet() on a positive definite matrix (or batch) $$\mathbf A$$. However, calling this method is far more efficient and stable than calling each method independently.

Parameters:
• input (Any or torch.Tensor) – $$\mathbf A$$ - the positive definite matrix (… x N x N)

• inv_quad_rhs (torch.Tensor, optional) – $$\mathbf R$$ - the right hand sides of the inverse quadratic term (… x N x M)

• logdet (bool) – Whether or not to compute the logdet term $$\log \vert \mathbf A \vert$$.

• reduce_inv_quad (bool) – Whether to compute $$\text{tr}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)$$ or $$\text{diag}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)$$.

Return type:

(torch.Tensor or None, torch.Tensor or None)

Returns:

The inverse quadratic term (or None), and the logdet term (or None). If reduce_inv_quad=True, the inverse quadratic term is of shape (…). Otherwise, it is (… x M).
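
For example, a minimal sketch computing both terms in one call (reusing A and R from above):

>>> inv_quad_term, logdet_term = linear_operator.inv_quad_logdet(
>>>     A, inv_quad_rhs=R, logdet=True
>>> )
>>> # inv_quad_term: tr(R^T A^{-1} R); logdet_term: log |A|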

linear_operator.pivoted_cholesky(input, rank, error_tol=None, return_pivots=False)[source]

Performs a partial pivoted Cholesky factorization of a positive definite matrix (or batch of matrices), $$\mathbf L \mathbf L^\top \approx \mathbf A$$. The partial pivoted Cholesky factor $$\mathbf L \in \mathbb R^{N \times \text{rank}}$$ forms a low-rank approximation to the LinearOperator.

The pivots are selected greedily, corresponding to the maximum diagonal element in the residual after each Cholesky iteration. See Harbrecht et al., 2012.

Parameters:
• input (Any or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• rank (int) – The size of the partial pivoted Cholesky factor.

• error_tol (float, optional) – Defines an optional stopping criterion. If the residual of the factorization is less than error_tol, then the factorization will exit early. This will result in a $$\leq \text{ rank}$$ factor.

• return_pivots (bool) – Whether or not to return the pivots alongside the partial pivoted Cholesky factor.

Return type:

torch.Tensor or (torch.Tensor, torch.Tensor)

Returns:

The … x N x rank factor (and optionally the … x N pivots if return_pivots is True).
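
For example, a minimal sketch of a rank-2 approximation (reusing the 4 x 4 SPD matrix A from above):

>>> L = linear_operator.pivoted_cholesky(A, rank=2)  # 4 x 2 factor
>>> approx = L @ L.transpose(-1, -2)                 # rank-2 approximation to A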

linear_operator.root_decomposition(input, method=None)[source]

Returns a (usually low-rank) root decomposition linear operator of the positive definite matrix (or batch of matrices) $$\mathbf A$$. This can be used for sampling from a Gaussian distribution, or for obtaining a low-rank version of a matrix.

Parameters:
• input (Any or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• method (str, optional) – Which method to use to perform the root decomposition. Choices are: “cholesky”, “lanczos”, “symeig”, “pivoted_cholesky”, or “svd”.

Return type:

Any

Returns:

A tensor $$\mathbf R$$ such that $$\mathbf R \mathbf R^\top \approx \mathbf A$$.
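
For example, a hedged sketch of the sampling use case, treating the result as the root $$\mathbf R$$ described in Returns above (reusing the SPD matrix A):

>>> root = linear_operator.root_decomposition(A, method="cholesky")
>>> z = torch.randn(4)
>>> sample = root @ z  # approximately distributed as N(0, A), since root @ root^T ≈ A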

linear_operator.root_inv_decomposition(input, initial_vectors=None, test_vectors=None, method=None)[source]

Returns a (usually low-rank) inverse root decomposition linear operator of the PSD LinearOperator $$\mathbf A$$. This can be used for sampling from a Gaussian distribution, or for obtaining a low-rank version of a matrix.

The root_inv_decomposition is performed using a partial Lanczos tridiagonalization.

Parameters:
• input (Any or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• initial_vectors (torch.Tensor, optional) – Vectors used to initialize the Lanczos decomposition. The best initialization vector (determined by test_vectors) will be chosen.

• test_vectors (torch.Tensor, optional) – Vectors used to test the accuracy of the decomposition.

• method (str, optional) – Root decomposition method to use (“symeig”, “diagonalization”, “lanczos”, or “cholesky”).

Return type:

Any

Returns:

A tensor $$\mathbf R$$ such that $$\mathbf R \mathbf R^\top \approx \mathbf A^{-1}$$.
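
For example, a hedged sketch using the inverse root for an approximate solve (reusing the SPD matrix A; this relies only on the property stated in Returns above):

>>> b = torch.randn(4)
>>> inv_root = linear_operator.root_inv_decomposition(A)
>>> approx_solve = inv_root @ (inv_root.transpose(-1, -2) @ b)  # ≈ A^{-1} b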

linear_operator.solve(input, rhs, lhs=None)[source]

Given a positive definite matrix (or batch of matrices) $$\mathbf A$$, computes a linear solve with right hand side $$\mathbf R$$:

$$$\mathbf A^{-1} \mathbf R,$$$

where $$\mathbf R$$ is rhs and $$\mathbf A$$ is the LinearOperator (input).

Note

Unlike torch.linalg.solve(), this function can take an optional lhs argument. If this is supplied, linear_operator.solve() computes

$$$\mathbf L \mathbf A^{-1} \mathbf R,$$$

where $$\mathbf L$$ is lhs. Supplying this can reduce the number of solver calls required in the backward pass.

Parameters:
• input (Any or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• rhs (torch.Tensor) – $$\mathbf R$$ - the right hand side

• lhs (torch.Tensor, optional) – $$\mathbf L$$ - the left hand side

Return type:

torch.Tensor

Returns:

$$\mathbf A^{-1} \mathbf R$$ or $$\mathbf L \mathbf A^{-1} \mathbf R$$.
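
For example, a minimal sketch with and without a left hand side (the values are illustrative):

>>> A = torch.tensor([[4., 1.], [1., 3.]])
>>> rhs = torch.tensor([[1.], [2.]])
>>> x = linear_operator.solve(A, rhs)       # A^{-1} R, shape 2 x 1
>>> lhs = torch.tensor([[1., 0.]])
>>> y = linear_operator.solve(A, rhs, lhs)  # L A^{-1} R, shape 1 x 1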

linear_operator.sqrt_inv_matmul(input, rhs, lhs=None)[source]

Given a positive definite matrix (or batch of matrices) $$\mathbf A$$ and a right hand side $$\mathbf R$$, computes

$$$\mathbf A^{-1/2} \mathbf R.$$$

If lhs is supplied, computes

$$$\mathbf L \mathbf A^{-1/2} \mathbf R,$$$

where $$\mathbf L$$ is lhs. (Supplying lhs can reduce the number of solver calls required in the backward pass.)

Parameters:
• input (Any or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• rhs (torch.Tensor) – $$\mathbf R$$ - the right hand side

• lhs (torch.Tensor, optional) – $$\mathbf L$$ - the left hand side

Return type:

torch.Tensor

Returns:

$$\mathbf A^{-1/2} \mathbf R$$ or $$\mathbf L \mathbf A^{-1/2} \mathbf R$$.
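
For example, a minimal sketch of the rhs-only form (reusing A and rhs from the solve example above):

>>> out = linear_operator.sqrt_inv_matmul(A, rhs)  # A^{-1/2} rhs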