inquanto.minimizers
Module provides access to minimizers compatible with variational experiments.
- class MinimizerRotosolve(max_iterations=20, tolerance=1e-4, disp=False, order_independence=True)
Bases:
GeneralMinimizer
The Rotosolve minimizer, introduced in Quantum 5, 391 (2021).
It finds the minimum of an estimator with a sinusoidal energy landscape.
- Parameters:
  - max_iterations (int, default: 20) – Maximum number of iterations allowed before the minimization is terminated.
  - tolerance (float, default: 1e-4) – Tolerance for convergence.
  - disp (bool, default: False) – If True, print information to the screen throughout minimization.
  - order_independence (bool, default: True) – If True, the minimizer operates independently of parameter order; if False, it depends on the order of the parameters.
- generate_report()
Generates a summary of the minimization.
- Returns:
  dict – A dictionary containing the number of iterations (n_iterations), the final value (final_value) and the final parameters (final_parameters).
- minimize(function, initial)
Minimize the function provided.
Minimization starts at the parameters provided by the initial argument.
- Parameters:
  - function – Objective function to minimize.
  - initial – Initial parameters at which to start the minimization.
- Returns:
  tuple[float, ndarray] – A tuple containing the final value and parameters obtained by the minimization.
- Raises:
  RuntimeError – If the optimizer does not converge within the maximum number of iterations.
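A minimal usage sketch of the interface documented above; the sinusoidal toy objective and starting point are illustrative only:

```python
import numpy as np

from inquanto.minimizers import MinimizerRotosolve

# Rotosolve assumes the landscape is sinusoidal in each parameter,
# which holds for this toy objective.
def objective(parameters):
    return float(np.sin(parameters[0]) + 0.5 * np.cos(parameters[1]))

minimizer = MinimizerRotosolve(max_iterations=20, tolerance=1e-4, disp=True)
final_value, final_parameters = minimizer.minimize(objective, np.array([0.1, 0.2]))
print(minimizer.generate_report())  # n_iterations, final_value, final_parameters
```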
- class MinimizerSGD(learning_rate=0.01, decay_rate=0.05, max_iterations=100, disp=False, callback=None)
Bases:
GeneralMinimizer
Uses the gradient and geometry of the objective function to accelerate minimization.
Introduced in Quantum 4, 269 (2020).
- Parameters:
  - learning_rate (float, default: 0.01) – Step size in the direction of descent.
  - decay_rate (float, default: 0.05) – User-defined decay rate.
  - max_iterations (int, default: 100) – Maximum number of iterations allowed before the variational loop is terminated.
  - disp (bool, default: False) – If True, displays the minimization history.
  - callback (Optional[Callable], default: None) – Custom callback for the minimizer.
- generate_report()
Generates a report summarizing the minimization.
Includes the final value, the final parameters and the number of iterations performed.
- Returns:
  dict – A dictionary containing the final value, parameters and number of iterations obtained by the minimizer.
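A construction sketch; since MinimizerSGD's own minimize signature is not listed above, this assumes the minimize(function, initial) interface shared by the other GeneralMinimizer subclasses, and the catch-all callback signature is a placeholder:

```python
import numpy as np

from inquanto.minimizers import MinimizerSGD

def objective(parameters):
    return float(np.sum((parameters - 1.0) ** 2))

# The callback signature is not documented above, so accept anything.
history = []
minimizer = MinimizerSGD(
    learning_rate=0.05,
    decay_rate=0.05,
    max_iterations=200,
    callback=lambda *args: history.append(args),
)
# Assumed interface, mirroring the other GeneralMinimizer subclasses.
final_value, final_parameters = minimizer.minimize(objective, np.zeros(2))
print(minimizer.generate_report())
```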
- class MinimizerSPSA(max_iterations=int(1e5), tolerance=1e-4, disp=False)
Bases:
GeneralMinimizer
The Simultaneous Perturbation Stochastic Approximation (SPSA) minimizer.
Implementation details are based on https://www.jhuapl.edu/spsa/PDF-SPSA/Spall_Implementation_of_the_Simultaneous.PDF
- Parameters:
  - max_iterations (int, default: int(1e5)) – Maximum number of iterations allowed before the minimization is terminated.
  - tolerance (float, default: 1e-4) – Tolerance for convergence.
  - disp (bool, default: False) – If True, print information to the screen throughout minimization.
- generate_report()
Generate a report summarizing the minimization.
Includes the final value and the final parameters.
- Returns:
  dict – A dictionary containing the final value and parameters obtained by the minimizer.
- minimize(function, initial, alpha=0.602, gamma=0.101, a=0.5, c=0.2, stability_constant=None, perturbation_samples=1, perturbation_samples_init=None, gradient_smoothing=False)
Minimize the function provided.
Minimization starts at the parameters provided by the initial argument.
- Parameters:
  - function (Callable[[Union[ndarray, list[float]]], float]) – Objective function to minimize.
  - initial (Union[ndarray, list[float]]) – Initial parameters for the minimization.
  - alpha (float, default: 0.602) – The exponent of the learning-rate power series.
  - gamma (float, default: 0.101) – The exponent of the perturbation power series.
  - a (float, default: 0.5) – The numerator of the initial learning-rate magnitude.
  - c (float, default: 0.2) – The initial perturbation magnitude.
  - stability_constant (Optional[float], default: None) – The denominator of the initial learning-rate magnitude.
  - perturbation_samples (int, default: 1) – The number of perturbation samples used for gradient approximation.
  - perturbation_samples_init (Optional[int], default: None) – The number of perturbation samples used for the initial gradient approximation. If None, this will be the same as perturbation_samples.
  - gradient_smoothing (bool, default: False) – If True, the gradient approximation is based on previous approximations.
- Returns:
  tuple[float, Union[ndarray, list[float]]] – The final value and parameters obtained by the minimization.
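A sketch pairing the documented constructor with the tunable minimize hyperparameters; the noisy quadratic stands in for a shot-noise-limited estimator:

```python
import numpy as np

from inquanto.minimizers import MinimizerSPSA

rng = np.random.default_rng(0)

# SPSA needs only two function evaluations per gradient estimate,
# so it tolerates stochastic noise such as finite-shot sampling.
def noisy_objective(parameters):
    return float(np.sum(np.asarray(parameters) ** 2) + 0.01 * rng.normal())

minimizer = MinimizerSPSA(max_iterations=1000, tolerance=1e-4)
final_value, final_parameters = minimizer.minimize(
    noisy_objective,
    np.array([0.5, -0.3]),
    a=0.5,                    # numerator of the initial learning-rate magnitude
    c=0.2,                    # initial perturbation magnitude
    perturbation_samples=2,   # average two perturbations per gradient estimate
    gradient_smoothing=True,  # reuse previous gradient approximations
)
```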
- class MinimizerScipy(method=OptimizationMethod.L_BFGS_B_smooth, options=None, disp=False, callback=None)
Bases:
GeneralMinimizer
A simple wrapper for SciPy minimization routines.
More minimizer details can be found in the SciPy documentation.
- Parameters:
  - method (OptimizationMethod | str, default: OptimizationMethod.L_BFGS_B_smooth) – The method to use. Popular methods to choose from include "CG", "BFGS", "SLSQP", and "COBYLA". For the L-BFGS-B method, the OptimizationMethod enum is used to conveniently specify the optimization method along with its associated default parameters.
  - options (Optional[dict], default: None) – Options passed through to the SciPy minimization as its options argument. This overrides any default settings provided by the method.
  - disp (bool, default: False) – If True, prints the minimization history.
  - callback (Optional[Callable], default: None) – Custom callback function.
- generate_report()
Generates a report containing a summary of the minimization.
- Returns:
  dict – A dictionary containing the final value and the location of the final value from the minimization.
- Raises:
  ValueError – If no result is available.
- property method: str
Get the method being used by the optimizer as a string.
- Returns:
The name of the minimization algorithm used by the minimizer.
- minimize(function, initial, gradient=None)
Minimize the function provided.
The minimization starts at the parameters provided in the initial argument. If a gradient callable is provided, it is evaluated during the minimization to aid convergence.
- Parameters:
  - function – Objective function to minimize.
  - initial – Initial parameters at which to start the minimization.
  - gradient (Optional[Callable], default: None) – Optional callable that evaluates the gradient of the function.
- Returns:
  tuple[float, ndarray] – The value of the function at the minimum and the location of the minimum.
- Raises:
  ValueError – If the optimization process fails.
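A sketch of both ways to select a method, with an analytic gradient passed to minimize; the Rosenbrock objective is illustrative only:

```python
import numpy as np

from inquanto.minimizers import MinimizerScipy, OptimizationMethod

def rosenbrock(p):
    x, y = p
    return (1.0 - x) ** 2 + 100.0 * (y - x**2) ** 2

def rosenbrock_gradient(p):
    x, y = p
    return np.array([-2.0 * (1.0 - x) - 400.0 * x * (y - x**2),
                     200.0 * (y - x**2)])

# A plain SciPy method string ...
minimizer = MinimizerScipy(method="BFGS")
value, parameters = minimizer.minimize(
    rosenbrock, np.zeros(2), gradient=rosenbrock_gradient
)

# ... or the enum, which bundles tuned L-BFGS-B defaults.
minimizer = MinimizerScipy(method=OptimizationMethod.L_BFGS_B_coarse)
print(minimizer.method)  # name of the underlying algorithm
```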
- class NaiveEulerIntegrator(time_eval, disp=False, callback=None, linear_solver=GeneralIntegrator.linear_solver_scipy_linalg)
Bases:
GeneralIntegrator
A simple Euler integrator to solve time evolution problems.
- Parameters:
  - time_eval (ndarray) – A monotonically increasing or decreasing sequence of time points at which derivatives are evaluated.
  - disp (bool, default: False) – If True, print information to the screen throughout the integration.
  - callback (Optional[Callable[[ndarray, float, ndarray], Any]], default: None) – An optional function \(f(p, t, x)\), where \(p\) are the parameters of the differential equation, \(t\) is the time at which the derivatives are evaluated and \(x\) are the derivatives.
  - linear_solver (Optional[Callable[[ndarray, ndarray], ndarray]], default: GeneralIntegrator.linear_solver_scipy_linalg) – An optional solver for the derivative at time \(t\).
- static linear_solver_scipy_linalg(a, b)
A wrapper for the scipy.linalg.solve() method. Solves the linear equation a @ x == b for the unknown x, where a is a square matrix. More information can be found in the SciPy documentation.
- static linear_solver_scipy_pinvh(a, b)
Linear equation solver using scipy.linalg.pinvh(). Solves the linear equation a @ x == b for the unknown x using the (Moore-Penrose) pseudo-inverse of the Hermitian matrix a. More information on scipy.linalg.pinvh() can be found in the SciPy documentation.
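Any callable with the same (a, b) -> x signature can be supplied as linear_solver. A hypothetical least-squares variant, for instance (not part of the library):

```python
import numpy as np

# Hypothetical drop-in replacement for the bundled solvers: same
# (a, b) -> x signature, but solves a @ x == b in the least-squares sense.
def linear_solver_lstsq(a, b):
    return np.linalg.lstsq(a, b, rcond=None)[0]
```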
- solve(linear_problem, initial, *args, **kwargs)
Solve the differential equation.
- Parameters:
  - linear_problem (Callable[[ndarray, float], tuple[ndarray, ndarray]]) – A function \(f(p, t) \mapsto A, b\) which takes the parameters and time and returns the linear problem (matrix \(A(t)\) and vector \(b(t)\) of \(Ax = b\)) at a time \(t\).
  - initial (ndarray) – Initial parameters.
  - args (Any)
  - kwargs (Any)
- Returns:
  ndarray – Array containing the solution of the differential equation for each time in time_eval, with the initial value in the first row.
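A sketch of the documented solve interface on the linear problem \(\dot{p} = -p\), whose exact solution is \(p(t) = p(0) e^{-t}\):

```python
import numpy as np

from inquanto.minimizers import NaiveEulerIntegrator

# f(p, t) -> (A, b) with A = I and b = -p encodes dp/dt = -p,
# since the derivative x solves A @ x == b.
def linear_problem(p, t):
    return np.eye(len(p)), -p

times = np.linspace(0.0, 1.0, 101)
integrator = NaiveEulerIntegrator(times)
solution = integrator.solve(linear_problem, np.array([1.0, 2.0]))
print(solution[-1])  # approximately [1.0, 2.0] * exp(-1)
```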
- class OptimizationMethod(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumeration of optimization methods with associated parameters.
Each enum member represents an optimization method along with a dictionary of parameters used in the optimization process. These settings are recommended based on empirical testing for optimal performance.
- L_BFGS_B_smooth
method: "L-BFGS-B". Applicability: This method is more suitable for smoother and more fine-grained optimization. ftol: Tolerance for termination; the iteration stops when \(\frac{f^k - f^{k+1}}{\max\left(|f^k|, |f^{k+1}|, 1\right)} \leq \text{ftol}\). eps: Absolute step size used for numerical approximation of the Jacobian via forward differences.
- L_BFGS_B_coarse
method: "L-BFGS-B". Applicability: This method is generally more suitable for coarser and more rapid optimization and may be preferable for optimizing noisy objective functions. ftol: Tolerance for termination; the iteration stops when \(\frac{f^k - f^{k+1}}{\max\left(|f^k|, |f^{k+1}|, 1\right)} \leq \text{ftol}\). eps: Absolute step size used for numerical approximation of the Jacobian via forward differences.
- L_BFGS_B_coarse = ('L-BFGS-B', {'eps': 0.1, 'ftol': 0.0001})
- L_BFGS_B_smooth = ('L-BFGS-B', {'eps': 1e-08, 'ftol': 2.2e-09})
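Since each member's value is a (method, options) pair, the bundled settings can be inspected with standard Enum semantics:

```python
from inquanto.minimizers import OptimizationMethod

method_name, options = OptimizationMethod.L_BFGS_B_smooth.value
print(method_name)  # "L-BFGS-B"
print(options)      # {"eps": 1e-08, "ftol": 2.2e-09}
```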
- class ScipyIVPIntegrator(time_eval, disp=False, callback=None, linear_solver=GeneralIntegrator.linear_solver_scipy_linalg)
Bases:
GeneralIntegrator
A simple wrapper for SciPy solve_ivp() for linear problems.
More details about solve_ivp() can be found in the SciPy documentation.
- Parameters:
  - time_eval (ndarray) – A monotonically increasing or decreasing sequence of time points at which derivatives are evaluated.
  - disp (bool, default: False) – If True, print information to the screen throughout the integration.
  - callback (Optional[Callable[[ndarray, float, ndarray], Any]], default: None) – An optional function \(f(p, t, x)\), where \(p\) are the parameters of the differential equation, \(t\) is the time at which the derivatives are evaluated and \(x\) are the derivatives.
  - linear_solver (Callable[[ndarray, ndarray], ndarray], default: GeneralIntegrator.linear_solver_scipy_linalg) – An optional solver for the derivative at time \(t\).
- static linear_solver_scipy_linalg(a, b)
A wrapper for the scipy.linalg.solve() method. Solves the linear equation a @ x == b for the unknown x, where a is a square matrix. More information can be found in the SciPy documentation.
- static linear_solver_scipy_pinvh(a, b)
Linear equation solver using scipy.linalg.pinvh(). Solves the linear equation a @ x == b for the unknown x using the (Moore-Penrose) pseudo-inverse of the Hermitian matrix a. More information on scipy.linalg.pinvh() can be found in the SciPy documentation.
- solve(linear_problem, initial, *args, **kwargs)
Solve the differential equation.
- Parameters:
  - linear_problem (Callable[[ndarray, float], tuple[ndarray, ndarray]]) – A function \(f(p, t) \mapsto A, b\) which takes the parameters and time and returns the linear problem (matrix \(A(t)\) and vector \(b(t)\) of \(Ax = b\)) at a time \(t\).
  - initial (ndarray) – Initial parameters.
  - args (Any)
  - kwargs (Any)
- Returns:
  ndarray – Array containing the solution of the differential equation for each time in time_eval, with the initial value in the first row.
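The interface mirrors NaiveEulerIntegrator, so the same linear problem can be handed to the solve_ivp()-backed integrator; a sketch:

```python
import numpy as np

from inquanto.minimizers import ScipyIVPIntegrator

def linear_problem(p, t):
    return np.eye(len(p)), -p  # dp/dt = -p, as before

integrator = ScipyIVPIntegrator(np.linspace(0.0, 1.0, 51))
solution = integrator.solve(linear_problem, np.array([1.0]))
```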
- class ScipyODEIntegrator(time_eval, disp=False, callback=None, linear_solver=GeneralIntegrator.linear_solver_scipy_linalg)
Bases:
GeneralIntegrator
A simple wrapper for SciPy odeint() for linear problems.
More details about odeint() can be found in the SciPy documentation.
- Parameters:
  - time_eval (ndarray) – A monotonically increasing or decreasing sequence of time points at which derivatives are evaluated.
  - disp (bool, default: False) – If True, print information to the screen throughout the integration.
  - callback (Optional[Callable[[ndarray, float, ndarray], Any]], default: None) – An optional function \(f(p, t, x)\), where \(p\) are the parameters of the differential equation, \(t\) is the time at which the derivatives are evaluated and \(x\) are the derivatives.
  - linear_solver (Callable[[ndarray, ndarray], ndarray], default: GeneralIntegrator.linear_solver_scipy_linalg) – An optional solver for the derivative at time \(t\).
- static linear_solver_scipy_linalg(a, b)
A wrapper for the scipy.linalg.solve() method. Solves the linear equation a @ x == b for the unknown x, where a is a square matrix. More information can be found in the SciPy documentation.
- static linear_solver_scipy_pinvh(a, b)
Linear equation solver using scipy.linalg.pinvh(). Solves the linear equation a @ x == b for the unknown x using the (Moore-Penrose) pseudo-inverse of the Hermitian matrix a. More information on scipy.linalg.pinvh() can be found in the SciPy documentation.
- solve(linear_problem, initial, *args, **kwargs)
Solve the differential equation.
- Parameters:
  - linear_problem (Callable[[ndarray, float], tuple[ndarray, ndarray]]) – A function \(f(p, t) \mapsto A, b\) which takes the parameters and time and returns the linear problem (matrix \(A(t)\) and vector \(b(t)\) of \(Ax = b\)) at a time \(t\).
  - initial (ndarray) – Initial parameters.
  - args (Any)
  - kwargs (Any)
- Returns:
  ndarray – Array containing the solution to the problem for each time in time_eval, with the initial value in the first row.
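A sketch swapping in the bundled pseudo-inverse solver, which is useful when \(A(t)\) is Hermitian and (near-)singular:

```python
import numpy as np

from inquanto.minimizers import ScipyODEIntegrator

def linear_problem(p, t):
    return np.eye(len(p)), -p  # A is Hermitian here, so pinvh applies

integrator = ScipyODEIntegrator(
    np.linspace(0.0, 1.0, 51),
    linear_solver=ScipyODEIntegrator.linear_solver_scipy_pinvh,
)
solution = integrator.solve(linear_problem, np.array([1.0]))
```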