# 5. lib — Helper functions, parameters, and C extensions¶

## 5.1. lib.parameters¶

PySCF environment variables are defined in this module.

### 5.1.1. Scratch directory¶

The PySCF scratch directory is specified by TMPDIR. Its default value is taken from the system-wide environment variable TMPDIR. It can be overridden by the environment variable PYSCF_TMPDIR. TMPDIR can also be set in the global configuration file (see Global configurations).

### 5.1.2. Maximum memory¶

The variable MAX_MEMORY defines the maximum memory (in MB) that PySCF can use in a calculation. The default value is 4000 MB. It can be overridden by the environment variable PYSCF_MAX_MEMORY. MAX_MEMORY can also be set in the global configuration file (see Global configurations).
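Both variables can be set in the shell before launching Python; a minimal sketch (the scratch path is only an example):

```shell
# Point PySCF scratch files at a local scratch area (example path)
export PYSCF_TMPDIR=/tmp/pyscf_scratch
# Allow PySCF to use up to 8000 MB of memory
export PYSCF_MAX_MEMORY=8000
```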

Note

Some calculations may exceed the max_memory limit, especially when the attribute Mole.incore_anyway is set.

## 5.2. lib.logger¶

Logging system

### 5.2.1. Log level¶

| Level  | Number |
|--------|--------|
| DEBUG4 | 9 |
| DEBUG3 | 8 |
| DEBUG2 | 7 |
| DEBUG1 | 6 |
| DEBUG  | 5 |
| INFO   | 4 |
| NOTE   | 3 |
| WARN   | 2 |
| ERROR  | 1 |
| QUIET  | 0 |

Larger values produce more detailed output.

Note

Error and warning messages are written to stderr.

Each Logger object has its own output destination and verbose level, so multiple Logger objects can be created to manage messages independently. The methods of the Logger class correspond directly to log levels, e.g. info() prints its message only if the verbose level >= 4 (INFO):

>>> import sys
>>> from pyscf import lib
>>> log = lib.logger.Logger(sys.stdout, 4)
>>> log.info('info level')
info level
>>> log.verbose = 3
>>> log.info('info level')
>>> log.note('note level')
note level


### 5.2.2. timer¶

A Logger object provides the timer method for timing. TIMER_LEVEL controls the log level at which timing information is printed; it is 5 (DEBUG) by default.

>>> import sys, time
>>> from pyscf import lib
>>> log = lib.logger.Logger(sys.stdout, 4)
>>> t0 = time.process_time()
>>> log.timer('test', t0)
>>> lib.logger.TIMER_LEVEL = 4
>>> log.timer('test', t0)
CPU time for test      0.00 sec


### 5.2.3. Logger object¶

class lib.logger.Logger(stdout=sys.stdout, verbose=3)[source]
Attributes:
stdout
: file object or sys.stdout
The file to dump output message.
verbose
: int
Large value means more noise in the output file.
lib.logger.new_logger(rec=None, verbose=None)[source]

Create and return a Logger object

Args:

rec : An object which carries the attributes stdout and verbose

verbose
: a Logger object, or integer or None
The verbose level. If verbose is a Logger object, the Logger object is returned. If verbose is not specified (None), rec.verbose will be used in the new Logger object.

## 5.3. numpy extensions¶

Extension to numpy and scipy

lib.numpy_helper.asarray(a, dtype=None, order=None)[source]

Convert a list of N-dimensional arrays into one (N+1)-dimensional array. It is equivalent to the numpy.asarray function.

lib.numpy_helper.cartesian_prod(arrays, out=None)[source]

Generate a cartesian product of input arrays. http://stackoverflow.com/questions/1208118/using-numpy-to-build-an-array-of-all-combinations-of-two-arrays

Args:
arrays
: list of array-like
1-D arrays to form the cartesian product of.
out
: ndarray
Array to place the cartesian product in.
Returns:
out
: ndarray
2-D array of shape (M, len(arrays)) containing cartesian products formed of input arrays.

Examples:

>>> cartesian_prod(([1, 2, 3], [4, 5], [6, 7]))
array([[1, 4, 6],
[1, 4, 7],
[1, 5, 6],
[1, 5, 7],
[2, 4, 6],
[2, 4, 7],
[2, 5, 6],
[2, 5, 7],
[3, 4, 6],
[3, 4, 7],
[3, 5, 6],
[3, 5, 7]])

lib.numpy_helper.cond(x, p=None)[source]

Compute the condition number

lib.numpy_helper.condense(opname, a, locs)[source]

Condense a 2D array block-wise. Roughly:

nd = len(locs) - 1
out = numpy.empty((nd, nd))
for i, i0 in enumerate(locs[:-1]):
    i1 = locs[i+1]
    for j, j0 in enumerate(locs[:-1]):
        j1 = locs[j+1]
        out[i, j] = op(a[i0:i1, j0:j1])
return out
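A runnable pure-NumPy sketch of the block reduction described above, where op is any function that reduces a 2D block to a scalar:

```python
import numpy as np

def condense_sketch(op, a, locs):
    # Reduce each block a[locs[i]:locs[i+1], locs[j]:locs[j+1]] to one scalar
    nd = len(locs) - 1
    out = np.empty((nd, nd))
    for i in range(nd):
        for j in range(nd):
            out[i, j] = op(a[locs[i]:locs[i+1], locs[j]:locs[j+1]])
    return out

a = np.arange(16.).reshape(4, 4)
print(condense_sketch(np.sum, a, [0, 2, 4]))
# [[10. 18.]
#  [42. 50.]]
```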

lib.numpy_helper.ddot(a, b, alpha=1, c=None, beta=0)[source]

Matrix-matrix multiplication for double precision arrays

lib.numpy_helper.direct_sum(subscripts, *operands)[source]

Apply the summation over many operands with the einsum fashion.

Examples:

>>> a = numpy.random.random((6,5))
>>> b = numpy.random.random((4,3,2))
>>> direct_sum('ij,klm->ijklm', a, b).shape
(6, 5, 4, 3, 2)
>>> direct_sum('ij,klm', a, b).shape
(6, 5, 4, 3, 2)
>>> direct_sum('i,j,klm->mjlik', a[0], a[:,0], b).shape
(2, 6, 3, 5, 4)
>>> direct_sum('ij-klm->ijklm', a, b).shape
(6, 5, 4, 3, 2)
>>> direct_sum('ij+klm', a, b).shape
(6, 5, 4, 3, 2)
>>> direct_sum('-i-j+klm->mjlik', a[0], a[:,0], b).shape
(2, 6, 3, 5, 4)
>>> c = numpy.random.random((3,5))
>>> z = direct_sum('ik+jk->kij', a, c)  # This is slow
>>> abs(a.T.reshape(5,6,1) + c.T.reshape(5,1,3) - z).sum()
0.0
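The simplest cases reduce to NumPy broadcasting; e.g. direct_sum('i,j->ij', a, b) is the outer sum:

```python
import numpy as np

# direct_sum('i,j->ij', a, b) is the broadcasted outer sum a[:,None] + b[None,:]
a = np.arange(3.)
b = np.arange(4.)
out = a[:, None] + b[None, :]
print(out.shape)  # (3, 4)
```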

lib.numpy_helper.einsum(subscripts, *tensors, **kwargs)[source]

Perform a more efficient einsum via reshaping to a matrix multiply.

Current differences compared to numpy.einsum: This assumes that each repeated index is actually summed (i.e. no ‘i,i->i’) and appears only twice (i.e. no ‘ij,ik,il->jkl’). The output indices must be explicitly specified (i.e. ‘ij,j->i’ and not ‘ij,j’).
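The restrictions can be illustrated with numpy.einsum, which accepts the same subscripts for the supported cases:

```python
import numpy as np

# 'ij,jk->ik': the summed index j appears exactly twice and the output is explicit
a = np.random.rand(3, 4)
b = np.random.rand(4, 5)
c = np.einsum('ij,jk->ik', a, b)  # equivalent to a @ b
print(np.allclose(c, a @ b))  # True
```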

lib.numpy_helper.expm(a)[source]

Equivalent to scipy.linalg.expm

lib.numpy_helper.frompointer(pointer, count, dtype=float)[source]

Interpret a buffer that the pointer refers to as a 1-dimensional array.

Args:
pointer
: int or ctypes pointer
count
: int
dtype
: data-type, optional
Data-type of the returned array; default: float.

Examples:

>>> s = numpy.ones(3, dtype=numpy.int32)
>>> ptr = s.ctypes.data
>>> frompointer(ptr, count=6, dtype=numpy.int16)
[1, 0, 1, 0, 1, 0]
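The behavior can be mimicked with ctypes and numpy.frombuffer; a sketch (the element order of the int16 view depends on byte order, so only the sum is checked here):

```python
import ctypes
import numpy as np

s = np.ones(3, dtype=np.int32)
ptr = s.ctypes.data                       # raw address of the buffer
nbytes = 6 * np.dtype(np.int16).itemsize
buf = (ctypes.c_char * nbytes).from_address(ptr)
arr = np.frombuffer(buf, dtype=np.int16, count=6)
print(arr.sum())  # 3: each int32 one contributes a single nonzero int16
```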

lib.numpy_helper.hermi_sum(a, axes=None, hermi=1, inplace=False, out=None)[source]

Computing a + a.T.conj() with better memory efficiency

Examples:

>>> hermi_sum(numpy.arange(4.).reshape(2,2))
[[ 0.  3.]
[ 3.  6.]]

lib.numpy_helper.hermi_triu(mat, hermi=1, inplace=True)[source]

Use the elements of the lower triangular part to fill the upper triangular part.

Kwargs:

hermi : int

1 (default) return a hermitian matrix
2 return an anti-hermitian matrix

Examples:

>>> hermi_triu(numpy.arange(9.).reshape(3,3), 1)
[[ 0.  3.  6.]
[ 3.  4.  7.]
[ 6.  7.  8.]]
>>> hermi_triu(numpy.arange(9.).reshape(3,3), 2)
[[ 0. -3. -6.]
[ 3.  4. -7.]
[ 6.  7.  8.]]

lib.numpy_helper.pack_tril(mat, axis=-1, out=None)[source]

Flatten the lower triangular part of a matrix. Given mat, it returns mat[...,numpy.tril_indices(mat.shape[0])].

Examples:

>>> pack_tril(numpy.arange(9).reshape(3,3))
[0 3 4 6 7 8]
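The documented equivalence in plain NumPy:

```python
import numpy as np

mat = np.arange(9).reshape(3, 3)
packed = mat[np.tril_indices(3)]   # lower triangle, row by row
print(packed)  # [0 3 4 6 7 8]
```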

lib.numpy_helper.solve_lineq_by_SVD(a, b)[source]

Solve a * x = b. If a is a singular matrix, its small singular values are neglected.

lib.numpy_helper.tag_array(a, **kwargs)[source]

Attach attributes to numpy ndarray. The attribute name and value are obtained from the keyword arguments.

lib.numpy_helper.take_2d(a, idx, idy, out=None)[source]

Equivalent to a[idx[:,None],idy] for a 2D array.

Examples:

>>> a = numpy.arange(9.).reshape(3,3)
>>> take_2d(a, [0,2], [0,2])
[[ 0.  2.]
[ 6.  8.]]
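In plain NumPy the same sub-block can be taken with numpy.ix_:

```python
import numpy as np

a = np.arange(9.).reshape(3, 3)
sub = a[np.ix_([0, 2], [0, 2])]    # same as a[idx[:,None], idy]
print(sub)
# [[0. 2.]
#  [6. 8.]]
```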

lib.numpy_helper.takebak_2d(out, a, idx, idy)[source]

Reverse operation of take_2d. Equivalent to out[idx[:,None],idy] += a for a 2D array.

Examples:

>>> out = numpy.zeros((3,3))
>>> takebak_2d(out, numpy.ones((2,2)), [0,2], [0,2])
[[ 1.  0.  1.]
[ 0.  0.  0.]
[ 1.  0.  1.]]

lib.numpy_helper.transpose(a, axes=None, inplace=False, out=None)[source]

Transposing an array with better memory efficiency

Examples:

>>> transpose(numpy.ones((3,2)))
[[ 1.  1.  1.]
[ 1.  1.  1.]]

lib.numpy_helper.transpose_sum(a, inplace=False, out=None)[source]

Computing a + a.T with better memory efficiency

Examples:

>>> transpose_sum(numpy.arange(4.).reshape(2,2))
[[ 0.  3.]
[ 3.  6.]]

lib.numpy_helper.unpack_row(tril, row_id)[source]

Extract one row of the lower triangular part of a matrix. It is equivalent to unpack_tril(tril)[row_id].

Examples:

>>> unpack_row(numpy.arange(6.), 0)
[ 0. 1. 3.]
>>> unpack_tril(numpy.arange(6.))[0]
[ 0. 1. 3.]

lib.numpy_helper.unpack_tril(tril, filltriu=1, axis=-1, out=None)[source]

Reverse operation of pack_tril.

Kwargs:

filltriu : int

0 Do not fill the upper triangular part; arbitrary values may appear there
1 (default) Transpose the lower triangular part to fill the upper triangular part
2 Similar to filltriu=1, but the negative of the lower triangular part is assigned to the upper triangular part to make the matrix anti-hermitian

Examples:

>>> unpack_tril(numpy.arange(6.))
[[ 0. 1. 3.]
[ 1. 2. 4.]
[ 3. 4. 5.]]
>>> unpack_tril(numpy.arange(6.), 0)
[[ 0. 0. 0.]
[ 1. 2. 0.]
[ 3. 4. 5.]]
>>> unpack_tril(numpy.arange(6.), 2)
[[ 0. -1. -3.]
[ 1.  2. -4.]
[ 3.  4.  5.]]
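A pure-NumPy sketch of the default (filltriu=1) behavior for a 1D packed input:

```python
import numpy as np

def unpack_tril_sketch(tril):
    # Recover n from n*(n+1)/2 packed elements, then symmetrize
    n = int((np.sqrt(8 * tril.size + 1) - 1) // 2)
    out = np.zeros((n, n))
    out[np.tril_indices(n)] = tril
    return out + out.T - np.diag(out.diagonal())

print(unpack_tril_sketch(np.arange(6.)))
# [[0. 1. 3.]
#  [1. 2. 4.]
#  [3. 4. 5.]]
```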

lib.numpy_helper.zdot(a, b, alpha=1, c=None, beta=0)[source]

Matrix-matrix multiplication for double complex arrays

## 5.4. scipy extensions¶

Extension to scipy.linalg module

lib.linalg_helper.cho_solve(a, b)[source]

Solve a x = b by Cholesky factorization, where a is a Hermitian, positive-definite matrix.

lib.linalg_helper.davidson(aop, x0, precond, tol=1e-12, max_cycle=50, max_space=12, lindep=1e-14, max_memory=2000, dot=<built-in function dot>, callback=None, nroots=1, lessio=False, pick=None, verbose=2, follow_state=False)[source]

Davidson diagonalization method to solve a c = e c. Ref [1] E.R. Davidson, J. Comput. Phys. 17 (1), 87-94 (1975). [2] http://people.inf.ethz.ch/arbenz/ewp/Lnotes/chapter11.pdf

Args:
aop
: function(x) => array_like_x
aop(x) to mimic the matrix vector multiplication $$\sum_{j}a_{ij}*x_j$$. The argument is a 1D array. The returned value is a 1D array.
x0
: 1D array or a list of 1D array
Initial guess. The initial guess vector(s) are used as the initial subspace basis. If the number of guess vectors is smaller than nroots (e.g. 10 roots but one initial guess), the subspace is expanded during the iterations: the first iteration holds one vector, the next two, then four, ..., until the subspace size exceeds nroots.
precond
: diagonal elements of the matrix or function(dx, e, x0) => array_like_dx
Preconditioner to generate new trial vector. The argument dx is a residual vector a*x0-e*x0; e is the current eigenvalue; x0 is the current eigenvector.
Kwargs:
tol
: float
Convergence tolerance.
max_cycle
: int
max number of iterations.
max_space
: int
space size to hold trial vectors.
lindep
: float
Linear dependency threshold. The function is terminated when the smallest eigenvalue of the metric of the trial vectors is lower than this threshold.
max_memory
: int or float
Allowed memory in MB.
dot
: function(x, y) => scalar
Inner product
callback
: function(envs_dict) => None
The callback function takes one dict as its argument, generated by the built-in function locals(), so that it can access all local variables in the current environment.
nroots
: int
Number of eigenvalues to be computed. When nroots > 1, it affects the shape of the return value
lessio
: bool
How to compute a*x0 for the current eigenvector x0. There are two ways: assemble the existing a*x products, or call aop(x0). The default is the first, which requires more I/O but less computation. When I/O is slow, the second can be considered.
pick
: function(w,v,nroots) => (e[idx], w[:,idx], idx)
Function to filter eigenvalues and eigenvectors.
follow_state
: bool
If the solution dramatically changes in two iterations, clean the subspace and restart the iteration with the old solution. It can help to improve numerical stability. Default is False.
Returns:
e
: float or list of floats
Eigenvalue. By default it’s one float number. If nroots > 1, it is a list of floats for the lowest nroots eigenvalues.
c
: 1D array or list of 1D arrays
Eigenvector. By default it’s a 1D array. If nroots > 1, it is a list of arrays for the lowest nroots eigenvectors.

Examples:

>>> from pyscf import lib
>>> a = numpy.random.random((10,10))
>>> a = a + a.T
>>> aop = lambda x: numpy.dot(a,x)
>>> precond = lambda dx, e, x0: dx/(a.diagonal()-e)
>>> x0 = a[0]
>>> e, c = lib.davidson(aop, x0, precond)
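To make the algorithm concrete, here is a minimal pure-NumPy single-root Davidson sketch for a small dense symmetric matrix; it is a simplification (no restarts, no max_space handling) and not the PySCF implementation:

```python
import numpy as np

def davidson_sketch(a, tol=1e-9, max_cycle=50):
    # Minimal single-root Davidson sketch for a small dense symmetric matrix.
    # Real implementations add restarts, max_space control, and blocked roots.
    n = a.shape[0]
    x = np.eye(n)[0]                    # initial guess vector
    V = []                              # orthonormal subspace basis
    for _ in range(max_cycle):
        for v in V:                     # Gram-Schmidt against the subspace
            x = x - v * np.dot(v, x)
        norm = np.linalg.norm(x)
        if norm < 1e-12:                # no new direction left
            break
        V.append(x / norm)
        Vm = np.array(V)
        w, c = np.linalg.eigh(Vm @ a @ Vm.T)   # small subspace eigenproblem
        e, x0 = w[0], Vm.T @ c[:, 0]    # lowest root and its eigenvector
        r = a @ x0 - e * x0             # residual
        if np.linalg.norm(r) < tol:
            break
        denom = e - np.diag(a)          # diagonal preconditioner
        denom[np.abs(denom) < 1e-12] = 1e-12
        x = r / denom
    return e, x0

np.random.seed(1)
a = np.random.random((10, 10))
a = a + a.T
e, c = davidson_sketch(a)
```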

lib.linalg_helper.davidson1(aop, x0, precond, tol=1e-12, max_cycle=50, max_space=12, lindep=1e-14, max_memory=2000, dot=<built-in function dot>, callback=None, nroots=1, lessio=False, pick=None, verbose=2, follow_state=False, tol_residual=None)[source]

Davidson diagonalization method to solve a c = e c. Ref [1] E.R. Davidson, J. Comput. Phys. 17 (1), 87-94 (1975). [2] http://people.inf.ethz.ch/arbenz/ewp/Lnotes/chapter11.pdf

Args:
aop
: function([x]) => [array_like_x]
Matrix vector multiplication $$y_{ki} = \sum_{j}a_{ij}*x_{jk}$$.
x0
: 1D array or a list of 1D arrays or a function to generate x0 array(s)
Initial guess. The initial guess vector(s) are used as the initial subspace basis. If the number of guess vectors is smaller than nroots (e.g. 10 roots but one initial guess), the subspace is expanded during the iterations: the first iteration holds one vector, the next two, then four, ..., until the subspace size exceeds nroots.
precond
: diagonal elements of the matrix or function(dx, e, x0) => array_like_dx
Preconditioner to generate new trial vector. The argument dx is a residual vector a*x0-e*x0; e is the current eigenvalue; x0 is the current eigenvector.
Kwargs:
tol
: float
Convergence tolerance.
max_cycle
: int
max number of iterations.
max_space
: int
space size to hold trial vectors.
lindep
: float
Linear dependency threshold. The function is terminated when the smallest eigenvalue of the metric of the trial vectors is lower than this threshold.
max_memory
: int or float
Allowed memory in MB.
dot
: function(x, y) => scalar
Inner product
callback
: function(envs_dict) => None
The callback function takes one dict as its argument, generated by the built-in function locals(), so that it can access all local variables in the current environment.
nroots
: int
Number of eigenvalues to be computed. When nroots > 1, it affects the shape of the return value
lessio
: bool
How to compute a*x0 for the current eigenvector x0. There are two ways: assemble the existing a*x products, or call aop(x0). The default is the first, which requires more I/O but less computation. When I/O is slow, the second can be considered.
pick
: function(w,v,nroots) => (e[idx], w[:,idx], idx)
Function to filter eigenvalues and eigenvectors.
follow_state
: bool
If the solution dramatically changes in two iterations, clean the subspace and restart the iteration with the old solution. It can help to improve numerical stability. Default is False.
Returns:
conv
: bool
Converged or not
e
: list of floats
The lowest nroots eigenvalues.
c
: list of 1D arrays
The lowest nroots eigenvectors.

Examples:

>>> from pyscf import lib
>>> a = numpy.random.random((10,10))
>>> a = a + a.T
>>> aop = lambda xs: [numpy.dot(a,x) for x in xs]
>>> precond = lambda dx, e, x0: dx/(a.diagonal()-e)
>>> x0 = a[0]
>>> conv, e, c = lib.davidson1(aop, [x0], precond, nroots=2)
>>> len(e)
2

lib.linalg_helper.davidson_nosym(aop, x0, precond, tol=1e-12, max_cycle=50, max_space=12, lindep=1e-14, max_memory=2000, dot=<built-in function dot>, callback=None, nroots=1, lessio=False, left=False, pick=<function pick_real_eigs>, verbose=2, follow_state=False)

Davidson diagonalization to solve the non-symmetric eigenvalue problem

Args:
aop
: function([x]) => [array_like_x]
Matrix vector multiplication $$y_{ki} = \sum_{j}a_{ij}*x_{jk}$$.
x0
: 1D array or a list of 1D array
Initial guess. The initial guess vector(s) are used as the initial subspace basis. If the number of guess vectors is smaller than nroots (e.g. 10 roots but one initial guess), the subspace is expanded during the iterations: the first iteration holds one vector, the next two, then four, ..., until the subspace size exceeds nroots.
precond
: diagonal elements of the matrix or function(dx, e, x0) => array_like_dx
Preconditioner to generate new trial vector. The argument dx is a residual vector a*x0-e*x0; e is the current eigenvalue; x0 is the current eigenvector.
Kwargs:
tol
: float
Convergence tolerance.
max_cycle
: int
max number of iterations.
max_space
: int
space size to hold trial vectors.
lindep
: float
Linear dependency threshold. The function is terminated when the smallest eigenvalue of the metric of the trial vectors is lower than this threshold.
max_memory
: int or float
Allowed memory in MB.
dot
: function(x, y) => scalar
Inner product
callback
: function(envs_dict) => None
The callback function takes one dict as its argument, generated by the built-in function locals(), so that it can access all local variables in the current environment.
nroots
: int
Number of eigenvalues to be computed. When nroots > 1, it affects the shape of the return value
lessio
: bool
How to compute a*x0 for the current eigenvector x0. There are two ways: assemble the existing a*x products, or call aop(x0). The default is the first, which requires more I/O but less computation. When I/O is slow, the second can be considered.
left
: bool
Whether to calculate and return left eigenvectors. Default is False.
pick
: function(w,v,nroots) => (e[idx], w[:,idx], idx)
Function to filter eigenvalues and eigenvectors.
follow_state
: bool
If the solution dramatically changes in two iterations, clean the subspace and restart the iteration with the old solution. It can help to improve numerical stability. Default is False.
Returns:
conv
: bool
Converged or not
e
: list of eigenvalues
The eigenvalues can be sorted real or complex, depending on the return value of pick function.
vl
: list of 1D arrays
Left eigenvectors. Only returned if left=True.
c
: list of 1D arrays
Right eigenvectors.

Examples:

>>> from pyscf import lib
>>> a = numpy.random.random((10,10))  # not symmetrized
>>> aop = lambda xs: [numpy.dot(a,x) for x in xs]
>>> precond = lambda dx, e, x0: dx/(a.diagonal()-e)
>>> x0 = a[0]
>>> conv, e, vl, vr = lib.davidson_nosym(aop, x0, precond, nroots=2, left=True)
>>> len(e)
2

lib.linalg_helper.dgeev(abop, x0, precond, type=1, tol=1e-12, max_cycle=50, max_space=12, lindep=1e-14, max_memory=2000, dot=<built-in function dot>, callback=None, nroots=1, lessio=False, verbose=2)[source]

Davidson diagonalization method to solve A c = e B c.

Args:
abop
: function(x) => (array_like_x, array_like_x)
abop applies two matrix vector multiplications and returns tuple (Ax, Bx)
x0
: 1D array
Initial guess
precond
: diagonal elements of the matrix or function(dx, e, x0) => array_like_dx
Preconditioner to generate new trial vector. The argument dx is a residual vector a*x0-e*x0; e is the current eigenvalue; x0 is the current eigenvector.
Kwargs:
tol
: float
Convergence tolerance.
max_cycle
: int
max number of iterations.
max_space
: int
space size to hold trial vectors.
lindep
: float
Linear dependency threshold. The function is terminated when the smallest eigenvalue of the metric of the trial vectors is lower than this threshold.
max_memory
: int or float
Allowed memory in MB.
dot
: function(x, y) => scalar
Inner product
callback
: function(envs_dict) => None
The callback function takes one dict as its argument, generated by the built-in function locals(), so that it can access all local variables in the current environment.
nroots
: int
Number of eigenvalues to be computed. When nroots > 1, it affects the shape of the return value
lessio
: bool
How to compute a*x0 for the current eigenvector x0. There are two ways: assemble the existing a*x products, or call aop(x0). The default is the first, which requires more I/O but less computation. When I/O is slow, the second can be considered.
Returns:
e
: list of floats
The lowest nroots eigenvalues.
c
: list of 1D arrays
The lowest nroots eigenvectors.
lib.linalg_helper.dgeev1(abop, x0, precond, type=1, tol=1e-12, max_cycle=50, max_space=12, lindep=1e-14, max_memory=2000, dot=<built-in function dot>, callback=None, nroots=1, lessio=False, verbose=2, tol_residual=None)[source]

Davidson diagonalization method to solve A c = e B c.

Args:
abop
: function([x]) => ([array_like_x], [array_like_x])
abop applies two matrix vector multiplications and returns tuple (Ax, Bx)
x0
: 1D array
Initial guess
precond
: diagonal elements of the matrix or function(dx, e, x0) => array_like_dx
Preconditioner to generate new trial vector. The argument dx is a residual vector a*x0-e*x0; e is the current eigenvalue; x0 is the current eigenvector.
Kwargs:
tol
: float
Convergence tolerance.
max_cycle
: int
max number of iterations.
max_space
: int
space size to hold trial vectors.
lindep
: float
Linear dependency threshold. The function is terminated when the smallest eigenvalue of the metric of the trial vectors is lower than this threshold.
max_memory
: int or float
Allowed memory in MB.
dot
: function(x, y) => scalar
Inner product
callback
: function(envs_dict) => None
The callback function takes one dict as its argument, generated by the built-in function locals(), so that it can access all local variables in the current environment.
nroots
: int
Number of eigenvalues to be computed. When nroots > 1, it affects the shape of the return value
lessio
: bool
How to compute a*x0 for the current eigenvector x0. There are two ways: assemble the existing a*x products, or call aop(x0). The default is the first, which requires more I/O but less computation. When I/O is slow, the second can be considered.
Returns:
conv
: bool
Converged or not
e
: list of floats
The lowest nroots eigenvalues.
c
: list of 1D arrays
The lowest nroots eigenvectors.
lib.linalg_helper.dsolve(aop, b, precond, tol=1e-12, max_cycle=30, dot=<built-in function dot>, lindep=1e-15, verbose=0, tol_residual=None)[source]

Davidson iteration to solve linear equations. It does not work well.

lib.linalg_helper.eig(aop, x0, precond, tol=1e-12, max_cycle=50, max_space=12, lindep=1e-14, max_memory=2000, dot=<built-in function dot>, callback=None, nroots=1, lessio=False, left=False, pick=<function pick_real_eigs>, verbose=2, follow_state=False)[source]

Davidson diagonalization to solve the non-symmetric eigenvalue problem

Args:
aop
: function([x]) => [array_like_x]
Matrix vector multiplication $$y_{ki} = \sum_{j}a_{ij}*x_{jk}$$.
x0
: 1D array or a list of 1D array
Initial guess. The initial guess vector(s) are used as the initial subspace basis. If the number of guess vectors is smaller than nroots (e.g. 10 roots but one initial guess), the subspace is expanded during the iterations: the first iteration holds one vector, the next two, then four, ..., until the subspace size exceeds nroots.
precond
: diagonal elements of the matrix or function(dx, e, x0) => array_like_dx
Preconditioner to generate new trial vector. The argument dx is a residual vector a*x0-e*x0; e is the current eigenvalue; x0 is the current eigenvector.
Kwargs:
tol
: float
Convergence tolerance.
max_cycle
: int
max number of iterations.
max_space
: int
space size to hold trial vectors.
lindep
: float
Linear dependency threshold. The function is terminated when the smallest eigenvalue of the metric of the trial vectors is lower than this threshold.
max_memory
: int or float
Allowed memory in MB.
dot
: function(x, y) => scalar
Inner product
callback
: function(envs_dict) => None
The callback function takes one dict as its argument, generated by the built-in function locals(), so that it can access all local variables in the current environment.
nroots
: int
Number of eigenvalues to be computed. When nroots > 1, it affects the shape of the return value
lessio
: bool
How to compute a*x0 for the current eigenvector x0. There are two ways: assemble the existing a*x products, or call aop(x0). The default is the first, which requires more I/O but less computation. When I/O is slow, the second can be considered.
left
: bool
Whether to calculate and return left eigenvectors. Default is False.
pick
: function(w,v,nroots) => (e[idx], w[:,idx], idx)
Function to filter eigenvalues and eigenvectors.
follow_state
: bool
If the solution dramatically changes in two iterations, clean the subspace and restart the iteration with the old solution. It can help to improve numerical stability. Default is False.
Returns:
conv
: bool
Converged or not
e
: list of eigenvalues
The eigenvalues can be sorted real or complex, depending on the return value of pick function.
vl
: list of 1D arrays
Left eigenvectors. Only returned if left=True.
c
: list of 1D arrays
Right eigenvectors.

Examples:

>>> from pyscf import lib
>>> a = numpy.random.random((10,10))  # not symmetrized
>>> aop = lambda xs: [numpy.dot(a,x) for x in xs]
>>> precond = lambda dx, e, x0: dx/(a.diagonal()-e)
>>> x0 = a[0]
>>> e, vl, vr = lib.eig(aop, x0, precond, nroots=2, left=True)
>>> len(e)
2

lib.linalg_helper.eigh_by_blocks(h, s=None, labels=None)[source]

Solve an ordinary or generalized eigenvalue problem for diagonal blocks. The diagonal blocks are extracted based on the given basis labels: rows and columns with the same label are grouped into the same block. A common scenario that needs block-wise diagonalization is diagonalizing a matrix in a symmetry-adapted basis, where the labels are the irreps of each basis function.

Args:
h, s
: 2D array
Complex Hermitian or real symmetric matrix.
Kwargs:
labels : list
Returns:
w, v. w is the eigenvalue vector; v is the eigenvector array.

Examples:

>>> from pyscf import lib
>>> a = numpy.ones((4,4))
>>> a[0::3,0::3] = 0
>>> a[1::3,1::3] = 2
>>> a[2::3,2::3] = 4
>>> labels = ['a', 'b', 'c', 'a']
>>> lib.eigh_by_blocks(a, labels)
(array([ 0.,  0.,  2.,  4.]),
array([[ 1.,  0.,  0.,  0.],
[ 0.,  0.,  1.,  0.],
[ 0.,  0.,  0.,  1.],
[ 0.,  1.,  0.,  0.]]))
>>> numpy.linalg.eigh(a)
(array([ -8.82020545e-01,  -1.81556477e-16,   1.77653793e+00,   5.10548262e+00]),
array([[  6.40734630e-01,  -7.07106781e-01,   1.68598330e-01,   -2.47050070e-01],
[ -3.80616542e-01,   9.40505244e-17,   8.19944479e-01,   -4.27577008e-01],
[ -1.84524565e-01,   9.40505244e-17,  -5.20423152e-01,   -8.33732828e-01],
[  6.40734630e-01,   7.07106781e-01,   1.68598330e-01,   -2.47050070e-01]]))

>>> from functools import reduce
>>> from pyscf import gto, lib, symm
>>> mol = gto.M(atom='H 0 0 0; H 0 0 1', basis='ccpvdz', symmetry=True)
>>> c = numpy.hstack(mol.symm_orb)
>>> vnuc_so = reduce(numpy.dot, (c.T, mol.intor('int1e_nuc_sph'), c))
>>> orbsym = symm.label_orb_symm(mol, mol.irrep_name, mol.symm_orb, c)
>>> lib.eigh_by_blocks(vnuc_so, labels=orbsym)
(array([-4.50766885, -1.80666351, -1.7808565 , -1.7808565 , -1.74189134,
-0.98998583, -0.98998583, -0.40322226, -0.30242374, -0.07608981]),
...)
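The block-wise idea can be sketched in plain NumPy (a simplification; the ordering conventions of the actual function may differ):

```python
import numpy as np

def eigh_by_blocks_sketch(h, labels):
    # Diagonalize each diagonal block defined by equal labels independently
    labels = np.asarray(labels)
    w = np.zeros(len(labels))
    v = np.zeros_like(h)
    for lab in np.unique(labels):
        idx = np.where(labels == lab)[0]
        wb, vb = np.linalg.eigh(h[np.ix_(idx, idx)])
        w[idx] = wb
        v[np.ix_(idx, idx)] = vb
    return w, v

h = np.array([[2., 1., 0.],
              [1., 2., 0.],
              [0., 0., 5.]])
w, v = eigh_by_blocks_sketch(h, ['a', 'a', 'b'])
print(w)  # [1. 3. 5.]
```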

lib.linalg_helper.krylov(aop, b, x0=None, tol=1e-10, max_cycle=30, dot=<built-in function dot>, lindep=1e-15, callback=None, hermi=False, max_memory=2000, verbose=2)[source]

Krylov subspace method to solve (1+a) x = b. Ref: J. A. Pople et al, Int. J. Quantum. Chem. Symp. 13, 225 (1979).

Args:
aop
: function(x) => array_like_x
aop(x) to mimic the matrix vector multiplication $$\sum_{j}a_{ij} x_j$$. The argument is a 1D array. The returned value is a 1D array.

b : a vector or a list of vectors

Kwargs:
x0
: 1D array
Initial guess
tol
: float
Tolerance to terminate the operation aop(x).
max_cycle
: int
max number of iterations.
lindep
: float
Linear dependency threshold. The function is terminated when the smallest eigenvalue of the metric of the trial vectors is lower than this threshold.
dot
: function(x, y) => scalar
Inner product
callback
: function(envs_dict) => None
The callback function takes one dict as its argument, generated by the built-in function locals(), so that it can access all local variables in the current environment.
Returns:
x : ndarray like b

Examples:

>>> from pyscf import lib
>>> a = numpy.random.random((10,10)) * 1e-2
>>> b = numpy.random.random(10)
>>> aop = lambda x: numpy.dot(a,x)
>>> x = lib.krylov(aop, b)
>>> numpy.allclose(numpy.dot(a,x)+x, b)
True
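For a small dense a, the solution can be cross-checked against a direct dense solve of (1+a) x = b:

```python
import numpy as np

np.random.seed(0)
a = np.random.random((10, 10)) * 1e-2
b = np.random.random(10)
x = np.linalg.solve(np.eye(10) + a, b)   # dense reference for lib.krylov
print(np.allclose(a @ x + x, b))  # True
```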

lib.linalg_helper.make_diag_precond(diag, level_shift=0)[source]

Generate the preconditioner function with the diagonal function.

lib.linalg_helper.pick_real_eigs(w, v, nroots, x0)[source]

This function searches for real eigenvalues, or eigenvalues with a small imaginary component, and constructs approximate real eigenvectors when quasi-real eigenvalues are found.

lib.linalg_helper.safe_eigh(h, s, lindep=1e-15)[source]

Solve generalized eigenvalue problem h v = w s v.

Note

The number of eigenvalues and eigenvectors might be less than the matrix dimension if linear dependency is found in metric s.

Args:
h, s
: 2D array
Complex Hermitian or real symmetric matrix.
Kwargs:
lindep
: float
Linear dependency threshold. By diagonalizing the metric s, eigenvectors whose eigenvalues are smaller than this threshold are considered linearly dependent and discarded.
Returns:
w, v, seig. w is the eigenvalue vector; v is the eigenfunction array; seig is the eigenvalue vector of the metric s.
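The canonical-orthogonalization idea behind safe_eigh can be sketched in plain NumPy (a simplification, not the PySCF implementation):

```python
import numpy as np

def safe_eigh_sketch(h, s, lindep=1e-15):
    # Drop metric eigenvectors with eigenvalue <= lindep, then solve the
    # problem in the remaining (well-conditioned) subspace
    seig, t = np.linalg.eigh(s)
    keep = seig > lindep
    x = t[:, keep] / np.sqrt(seig[keep])   # canonical orthogonalization
    w, v = np.linalg.eigh(x.conj().T @ h @ x)
    return w, x @ v, seig

h = np.array([[2., 1.], [1., 2.]])
w, v, seig = safe_eigh_sketch(h, np.eye(2))
print(w)  # [1. 3.]
```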

## 5.5. lib.chkfile¶

A chkfile is an HDF5 file.

### 5.5.1. Functions to access key/value in chkfile¶

lib.chkfile.save(chkfile, key, value)

Save array(s) in chkfile

Args:
chkfile
: str
Name of chkfile.
key
: str
key to be used in h5py object. It can contain “/” to represent the path in the HDF5 storage structure.
value
: array, vector, list ... or dict
If value is a python dict or list, the key/value of the dict will be saved recursively as the HDF5 group/dataset structure.
Returns:
No return value

Examples:

>>> import h5py
>>> from pyscf import lib
>>> ci = {'Ci' : {'op': ('E', 'i'), 'irrep': ('Ag', 'Au')}}
>>> lib.chkfile.save('symm.chk', 'symm', ci)
>>> f = h5py.File('symm.chk', 'r')
>>> f.keys()
['symm']
>>> f['symm'].keys()
['Ci']
>>> f['symm/Ci'].keys()
['op', 'irrep']
>>> f['symm/Ci/op']
<HDF5 dataset "op": shape (2,), type "|S1">
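The group/dataset layout produced by the recursive save can be pictured with a small pure-Python path-flattening sketch (no h5py needed):

```python
def flatten(d, prefix=''):
    # Map a nested dict onto HDF5-style "group/dataset" paths
    out = {}
    for k, v in d.items():
        path = prefix + '/' + k if prefix else k
        if isinstance(v, dict):
            out.update(flatten(v, path))
        else:
            out[path] = v
    return out

ci = {'Ci': {'op': ('E', 'i'), 'irrep': ('Ag', 'Au')}}
print(sorted(flatten(ci, 'symm')))  # ['symm/Ci/irrep', 'symm/Ci/op']
```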

lib.chkfile.load(chkfile, key)[source]

Args:
chkfile
: str
Name of chkfile. The chkfile needs to be saved in HDF5 format.
key
: str
HDF5.dataset name or group name. If key is the name of a HDF5 group, the group will be loaded into a Python dict, recursively.
Returns:
The deserialized value. If key refers to an HDF5 group, the group is returned as a (nested) Python dict.
Examples:

>>> from pyscf import gto, scf, lib
>>> mol = gto.M(atom='He 0 0 0')
>>> mf = scf.RHF(mol)
>>> mf.chkfile = 'He.chk'
>>> mf.kernel()
>>> mo_coeff = lib.chkfile.load('He.chk', 'scf/mo_coeff')
>>> mo_coeff.shape
(1, 1)
>>> scfdat = lib.chkfile.load('He.chk', 'scf')
>>> scfdat.keys()
['e_tot', 'mo_occ', 'mo_energy', 'mo_coeff']

lib.chkfile.save_mol(mol, chkfile)[source]

Save Mole object in chkfile

Args:
chkfile
: str
Name of chkfile.
Returns:
No return value
lib.chkfile.load_mol(chkfile)[source]

Load a Mole object from the chkfile. The save_mol/load_mol pair can be used as a serialization method for Mole objects.

Args:
chkfile
: str
Name of chkfile.
Returns:
A (initialized/built) Mole object

Examples:

>>> from pyscf import gto, lib
>>> mol = gto.M(atom='He 0 0 0')
>>> lib.chkfile.save_mol(mol, 'He.chk')
>>> lib.chkfile.load_mol('He.chk')
<pyscf.gto.mole.Mole object at 0x7fdcd94d7f50>


The results of SCF and MCSCF methods are saved as Python dictionaries in the chkfile. One can quickly load the results and update an SCF or MCSCF object using the Python built-in method .__dict__.update, e.g.:

from pyscf import gto, scf, mcscf, lib
mol = gto.M(atom='N 0 0 0; N 1 1 1', basis='ccpvdz')
mf = mol.apply(scf.RHF).set(chkfile='n2.chk').run()
mc = mcscf.CASSCF(mf, 6, 6).set(chkfile='n2.chk').run()

# Restore the SCF results from the chkfile and reuse them
mf = scf.RHF(mol)
mf.__dict__.update(lib.chkfile.load('n2.chk', 'scf'))

mc = mcscf.CASCI(mf, 6, 6)
mc.kernel()


## 5.6. lib.diis¶

DIIS

class lib.diis.DIIS(dev=None, filename=None, incore=False)[source]

Direct inversion in the iterative subspace method.

Attributes:
space
: int
DIIS subspace size. The maximum number of the vectors to be stored.
min_space
The minimal size of subspace before DIIS extrapolation.
Functions:
update(x, xerr=None) :
If the error vector xerr is given, this function pushes the target vector and error vector into the DIIS subspace, and uses the stored error vectors to extrapolate and return the extrapolated vector. If xerr is None, this function takes the difference between the current given vector and the last given vector as the error vector for the extrapolation.

Examples:

>>> from pyscf import gto, scf, lib
>>> mol = gto.M(atom='H 0 0 0; H 0 0 1', basis='ccpvdz')
>>> mf = scf.RHF(mol)
>>> h = mf.get_hcore()
>>> s = mf.get_ovlp()
>>> e, c = mf.eig(h, s)
>>> occ = mf.get_occ(e, c)
>>> # DIIS without error vector
>>> adiis = lib.diis.DIIS()
>>> for i in range(7):
...     dm = mf.make_rdm1(c, occ)
...     f = h + mf.get_veff(mol, dm)
...     if i > 1:
...         f = adiis.update(f)
...     e, c = mf.eig(f, s)
...     print('E_%d = %.12f' % (i, mf.energy_tot(dm, h, mf.get_veff(mol, dm))))
E_0 = -1.050329433306
E_1 = -1.098566175145
E_2 = -1.100103795287
E_3 = -1.100152104615
E_4 = -1.100153706922
E_5 = -1.100153764848
E_6 = -1.100153764878

>>> # Take Hartree-Fock gradients as the error vector
>>> adiis = lib.diis.DIIS()
>>> for i in range(7):
...     dm = mf.make_rdm1(c, occ)
...     f = h + mf.get_veff(mol, dm)
...     if i > 1:
...         f = adiis.update(f, mf.get_grad(c, occ, f))
...     e, c = mf.eig(f, s)
...     print('E_%d = %.12f' % (i, mf.energy_tot(dm, h, mf.get_veff(mol, dm))))
E_0 = -1.050329433306
E_1 = -1.098566175145
E_2 = -1.100103795287
E_3 = -1.100152104615
E_4 = -1.100153763813
E_5 = -1.100153764878
E_6 = -1.100153764878

restore(filename, inplace=True)[source]

Read the DIIS contents from a DIIS file, replace the attributes of the current DIIS object if needed, then construct the extrapolated vector.

update(x, xerr=None)[source]

Extrapolate vector

• If the error vector xerr is given, this function pushes the target vector and the error vector into the DIIS subspace, uses the stored error vectors to extrapolate, and returns the extrapolated vector.

• If xerr is None, this function takes the difference between the current vector and the last given vector as the error vector for the extrapolation.

lib.diis.restore(filename)[source]

Restore/construct diis object based on a diis file
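
The extrapolation performed by update() can be sketched in plain NumPy: DIIS finds coefficients c_i with sum_i c_i = 1 that minimize the norm of the combined error sum_i c_i e_i (via a small bordered linear system), and returns the same linear combination of the stored trial vectors. The helper below is an illustrative sketch, not PySCF's implementation:

```python
import numpy as np

def diis_extrapolate(xs, errs):
    """DIIS sketch: minimize ||sum_i c_i e_i|| subject to sum_i c_i = 1,
    then return the corresponding combination of the trial vectors xs."""
    n = len(xs)
    B = np.zeros((n + 1, n + 1))
    for i, ei in enumerate(errs):
        for j, ej in enumerate(errs):
            B[i, j] = np.dot(ei, ej)        # Gram matrix of error vectors
    B[n, :n] = B[:n, n] = -1.0              # border enforcing the constraint
    rhs = np.zeros(n + 1)
    rhs[n] = -1.0
    c = np.linalg.solve(B, rhs)[:n]         # last entry is the Lagrange multiplier
    return sum(ci * xi for ci, xi in zip(c, xs))

# Two trial vectors whose error vectors cancel when mixed 50/50:
x = diis_extrapolate([np.array([0.0]), np.array([1.0])],
                     [np.array([1.0]), np.array([-1.0])])
print(x)  # [0.5]
```

With equal and opposite error vectors, the constrained minimum is the 50/50 mixture, whose combined error is exactly zero.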

## 5.7. Other helper functions¶

### 5.7.1. Background mode¶

lib.call_in_background(*fns, **kwargs)[source]

Within this macro, function(s) can be executed asynchronously (the given functions are executed in the background).

Attributes:
sync (bool): Whether to run in synchronized mode. The default value
is False (asynchronous mode).

Examples:

>>> with call_in_background(fun) as async_fun:
...     async_fun(a, b)  # == fun(a, b)
...     do_something_else()

>>> with call_in_background(fun1, fun2) as (afun1, afun2):
...     afun2(a, b)
...     do_something_else()
...     afun2(a, b)
...     do_something_else()
...     afun1(a, b)
...     do_something_else()
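
The mechanics of this macro can be sketched with the standard library: each call to the wrapped function waits for the previous background call to finish and then launches the new work in a thread, so calls overlap with the caller's other work but never with each other. The sketch below (call_in_background_sketch is an illustrative name, not PySCF's actual implementation) shows the idea:

```python
import threading
from contextlib import contextmanager

@contextmanager
def call_in_background_sketch(fn):
    """Sketch of the background-call macro (hypothetical re-implementation)."""
    handler = [None]                      # the currently running background thread

    def async_fn(*args, **kwargs):
        if handler[0] is not None:
            handler[0].join()             # wait for the previous call to finish
        handler[0] = threading.Thread(target=fn, args=args, kwargs=kwargs)
        handler[0].start()                # run the new call in the background

    try:
        yield async_fn
    finally:
        if handler[0] is not None:
            handler[0].join()             # drain pending work on context exit

results = []
with call_in_background_sketch(results.append) as async_append:
    for i in range(5):
        async_append(i)                   # executed in a background thread
        # ... overlap other work here ...
print(results)  # [0, 1, 2, 3, 4]
```

Because each call joins the previous one, the wrapped function's invocations stay ordered even though they run off the main thread.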


### 5.7.2. Temporary HDF5 file¶

class lib.H5TmpFile(filename=None, *args, **kwargs)[source]

Create and return an HDF5 temporary file.

Kwargs:
filename
: str or None
If a string is given, an HDF5 file of the given filename will be created. The temporary file will exist even if the H5TmpFile object is released. If nothing is specified, the HDF5 temporary file will be deleted when the H5TmpFile object is released.

The return object is an h5py.File object. The file will be automatically deleted when it is closed or the object is released (unless filename is specified).

Examples:

>>> from pyscf import lib
>>> ftmp = lib.H5TmpFile()


### 5.7.3. OpenMP threads¶

lib.num_threads(n=None)[source]

Set the number of OpenMP threads. If the argument is not specified, the function returns the total number of available OpenMP threads.

Examples:

>>> from pyscf import lib
>>> print(lib.num_threads())
8
>>> lib.num_threads(4)
4
>>> print(lib.num_threads())
4

class lib.with_omp_threads(nthreads=None)[source]

Use this macro to create a temporary context in which the number of OpenMP threads is set to the required value. When the program exits the context, the number of OpenMP threads is restored.

Args:
nthreads : int
The number of OpenMP threads to use inside the context.

Examples:

>>> from pyscf import lib
>>> print(lib.num_threads())
8
>>> with lib.with_omp_threads(2):
...     print(lib.num_threads())
2
>>> print(lib.num_threads())
8


### 5.7.4. Capture stdout¶

class lib.capture_stdout[source]

Redirect all stdout (C printf and Python print) into a string.

Examples:

>>> import os
>>> from pyscf import lib
>>> with lib.capture_stdout() as out:
...     os.system('ls')
>>> print(out.read())


### 5.7.5. Other helper functions in lib.misc¶

Some hacky functions

lib.misc.flatten(lst)[source]

flatten nested lists x[0] + x[1] + x[2] + ...

Examples:

>>> flatten([[0, 2], [1], [[9, 8, 7]]])
[0, 2, 1, [9, 8, 7]]
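
As the example shows, flatten() joins only one level of nesting, which matches the behaviour of itertools.chain; a short stdlib equivalent (flatten_sketch is an illustrative name, not part of PySCF) is:

```python
from itertools import chain

def flatten_sketch(lst):
    # one level only: inner lists such as [9, 8, 7] stay intact
    return list(chain.from_iterable(lst))

print(flatten_sketch([[0, 2], [1], [[9, 8, 7]]]))  # [0, 2, 1, [9, 8, 7]]
```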

class lib.misc.light_speed(c)[source]

Within the context of this macro, the environment variable LIGHT_SPEED can be customized.

Examples:

>>> with light_speed(15.):
...     print(lib.param.LIGHT_SPEED)
15.
>>> print(lib.param.LIGHT_SPEED)
137.03599967994