Derivative Operators
A technical manual for derivatives in InfiniteOpt. See the respective guide for more information.
Definition
InfiniteOpt.deriv
— Functionderiv(expr::JuMP.AbstractJuMPScalar, pref1::GeneralVariableRef[, ....]
)::Union{JuMP.AbstractJuMPScalar, Float64}
Apply appropriate calculus methods to define and return the derivative expression of expr
with respect to the infinite parameter(s) pref1
, pref2, etc. in that respective order. This will implicitly build and add individual Derivatives as appropriate. Errors if no infinite parameter is given or if the parameters are not infinite.
Example
julia> @infinite_parameter(m, t in [0, 1])
t
julia> @variable(m, x, Infinite(t))
x(t)
julia> @variable(m, z)
z
julia> deriv_expr = deriv(x^2 + z, t, t)
2 ∂/∂t[∂/∂t[x(t)]]*x(t) + 2 ∂/∂t[x(t)]²
InfiniteOpt.∂
— Function∂(expr::JuMP.AbstractJuMPScalar, pref1::GeneralVariableRef[, ....]
)::Union{JuMP.AbstractJuMPScalar, Float64}
This serves as a convenient unicode wrapper for deriv
. The ∂
is produced via \partial
.
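For illustration, a minimal hedged sketch (reusing m, x(t), and z from the deriv example above); since ∂ simply wraps deriv, the result matches that example:
```julia-repl
julia> ∂(x^2 + z, t, t)  # equivalent to deriv(x^2 + z, t, t)
2 ∂/∂t[∂/∂t[x(t)]]*x(t) + 2 ∂/∂t[x(t)]²
```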
InfiniteOpt.@deriv
— Macro@deriv(expr, pref_expr1[, ...]
)::Union{JuMP.AbstractJuMPScalar, Float64}
The macro variant of deriv
that is more efficient for expression building and enables symbolic differential operator parameter definitions via the pref_exprs. Like deriv,
expr can be any InfiniteOpt expression and the appropriate calculus rules will be applied to expr
to take its derivative with respect to the indicated infinite parameters detailed by the pref_exprs. The resulting derivative expression will contain individual derivatives that were created and added to the InfiniteModel as needed. Here each pref_expr
argument can be of the form:
pref::GeneralVariableRef: An individual infinite parameter reference.
(pref::GeneralVariableRef)^(p::Int): An infinite parameter applied p times.
Thus, the syntax @deriv(expr, pref^2)
is equivalent to @deriv(expr, pref, pref)
.
This will error if pref_expr
is an unrecognized syntax, no infinite parameter is given, or if any of the specified parameters are not infinite.
Example
julia> @infinite_parameter(m, t in [0, 1])
t
julia> @variable(m, x, Infinite(t))
x(t)
julia> @variable(m, z)
z
julia> deriv_expr = @deriv(x^2 + z, t^2)
2 ∂/∂t[∂/∂t[x(t)]]*x(t) + 2 ∂/∂t[x(t)]²
InfiniteOpt.@∂
— Macro@∂(expr, pref_expr1[, ...])::Union{JuMP.AbstractJuMPScalar, Float64}
This serves as a convenient unicode wrapper for @deriv
. The ∂
is produced via \partial
.
InfiniteOpt.Deriv
— TypeDeriv{V, P} <: InfOptVariableType
A DataType
to assist in making derivative variables. This can be passed as an extra argument to @variable
to make such a variable:
@variable(model, var_expr, Deriv(inf_var, inf_par), kwargs...)
Here inf_var
is the infinite variable that is being operated on and inf_par
is the infinite parameter that the derivative is defined with respect to.
Fields
argument::V: The infinite variable being operated on.
operator_parameter::P: The infinite parameter that determines the derivative.
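As a hedged sketch of the pattern given above (reusing the model m, x(t), and t from the earlier examples; dxdt is just an illustrative name):
```julia
# create the first order derivative of x(t) with respect to t as a variable
@variable(m, dxdt, Deriv(x, t))
```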
JuMP.build_variable
— MethodJuMP.build_variable(_error::Function, info::JuMP.VariableInfo,
var_type::Deriv)::InfiniteVariable{GeneralVariableRef}
Build and return a first order derivative based on info
and var_type
. Errors if the information in var_type
is invalid. See Deriv
for more information.
Example
julia> info = VariableInfo(false, 0, false, 0, false, 0, true, 0, false, false);
julia> deriv_var = build_variable(error, info, Deriv(y, t));
InfiniteOpt.build_derivative
— Functionbuild_derivative(_error::Function, info::JuMP.VariableInfo,
argument_ref::GeneralVariableRef,
parameter_ref::GeneralVariableRef
)::Derivative
Constructs and returns a Derivative
with a differential operator that depends on parameter_ref
and operates on argument_ref
. Variable info
can also be provided to associate this derivative with bounds and a starting value function like that of infinite variables. Errors when argument_ref
is not an infinite/semi-infinite variable or derivative that depends on parameter_ref
.
Example
```julia-repl
julia> @infinite_parameter(m, t in [0, 1]); @infinite_variable(m, x(t));

julia> info = VariableInfo(false, 0, false, 0, false, 0, false, 0, false, false);

julia> build_derivative(error, info, x, t)
Derivative{GeneralVariableRef}(VariableInfo{Float64,Float64,Float64,Function}(false, 0.0, false, 0.0, false, 0.0, false, start_func, false, false), true, x(t), t)
```
InfiniteOpt.Derivative
— TypeDerivative{F <: Function, V <: GeneralVariableRef} <: JuMP.AbstractVariable
A DataType
for storing core infinite derivative information. This follows a derivative of the form: $\frac{\partial x(\alpha, \hdots)}{\partial \alpha}$ where $x(\alpha, \hdots)$ is an infinite variable and $\alpha$ is an infinite parameter. Here, both $x$ and $\alpha$ must be scalars.
It is important to note that info.start
should contain a start value function that generates the start value for a given infinite parameter support. This function should map a support to a start value using user-formatting if is_vector_start = false
, otherwise it should do the mapping using a single support vector as input. Also, the variable reference type V
must pertain to infinite variables and parameters.
Fields
info::JuMP.VariableInfo{Float64, Float64, Float64, F}: JuMP variable information.
is_vector_start::Bool: Does the start function take support values formatted as vectors?
variable_ref::V: The variable reference of the infinite variable argument.
parameter_ref::V: The variable reference of the infinite parameter that defines the differential operator.
InfiniteOpt.add_derivative
— Functionadd_derivative(model::InfiniteModel, d::Derivative,
[name::String = ""])::GeneralVariableRef
Adds a derivative d
to model
and returns a GeneralVariableRef
that points to it. Errors if the derivative dependencies do not belong to model
. Note that d
should be built using build_derivative
to avoid nuanced internal errors.
Example
julia> @infinite_parameter(m, t in [0, 1]); @variable(m, x, Infinite(t));
julia> info = VariableInfo(false, 0, false, 0, false, 0, false, 0, false, false);
julia> d = build_derivative(error, info, x, t);
julia> dref = add_derivative(m, d)
∂/∂t[x(t)]
InfiniteOpt.DerivativeIndex
— TypeDerivativeIndex <: ObjectIndex
A DataType
for storing the index of a Derivative
.
Fields
value::Int64
: The index value.
InfiniteOpt.DerivativeRef
— TypeDerivativeRef <: DispatchVariableRef
A DataType
for untranscripted derivative references.
Fields
model::InfiniteModel: Infinite model.
index::DerivativeIndex: Index of the derivative in model.
Queries
InfiniteOpt.derivative_argument
— Methodderivative_argument(dref::DerivativeRef)::GeneralVariableRef
Returns the infinite variable/derivative reference that is the input of the differential operator (i.e., the dependent variable of the derivative).
Example
julia> derivative_argument(dref)
x(t)
InfiniteOpt.operator_parameter
— Methodoperator_parameter(dref::DerivativeRef)::GeneralVariableRef
Returns the infinite parameter reference with respect to which the differential operator operates (i.e., the independent variable of the derivative).
Example
julia> operator_parameter(dref)
t
InfiniteOpt.num_derivatives
— Functionnum_derivatives(model::InfiniteModel)::Int
Returns the number of derivatives that have been defined in model
. Note that nested derivatives will be counted in accordance with their components (e.g., $\frac{d^2 x(t)}{dt^2} = \frac{d}{dt}\left(\frac{d x(t)}{dt}\right)$ will count as 2 derivatives).
Example
julia> num_derivatives(model)
12
InfiniteOpt.all_derivatives
— Functionall_derivatives(model::InfiniteModel)::Vector{GeneralVariableRef}
Returns a list of all the individual derivatives stored in model
.
Example
julia> all_derivatives(model)
3-element Array{GeneralVariableRef,1}:
∂/∂t[T(x, t)]
∂/∂x[T(x, t)]
∂/∂x[∂/∂x[T(x, t)]]
InfiniteOpt.parameter_refs
— Methodparameter_refs(dref::DerivativeRef)::Tuple
Return the parameter references associated with the infinite derivative dref
. This is formatted as a Tuple containing the parameter references as they were inputted to define dref
.
Example
julia> parameter_refs(deriv)
(t,)
InfiniteOpt.parameter_list
— Methodparameter_list(dref::DerivativeRef)::Vector{GeneralVariableRef}
Return a vector of the parameter references that dref
depends on. This is primarily an internal method where parameter_refs
is intended as the preferred user function.
InfiniteOpt.raw_parameter_refs
— Methodraw_parameter_refs(dref::DerivativeRef)::VectorTuple
Return the raw VectorTuple
of the parameter references that dref
depends on. This is primarily an internal method where parameter_refs
is intended as the preferred user function.
Modification
InfiniteOpt.set_start_value_function
— Methodset_start_value_function(dref::DerivativeRef,
start::Union{Real, Function})::Nothing
Set the start value function of dref
. If start::Real
then a function is generated such that the start value will be start
for the entire infinite domain. If start::Function
then this function should map to a scalar start value given support value arguments matching the format of the parameter elements in parameter_refs(dref)
.
Example
julia> set_start_value_function(dref, 1) # all start values will be 1
julia> set_start_value_function(dref, my_func) # each value will be made via my_func
InfiniteOpt.reset_start_value_function
— Methodreset_start_value_function(dref::DerivativeRef)::Nothing
Remove the existing start value function and return to the default. Generally, this is triggered by deleting an infinite parameter that dref
depends on.
Example
julia> reset_start_value_function(dref)
Evaluation
InfiniteOpt.AbstractDerivativeMethod
— TypeAbstractDerivativeMethod
An abstract type for storing derivative evaluation data that is pertinent to its reformulation/transcription.
InfiniteOpt.GenerativeDerivativeMethod
— TypeGenerativeDerivativeMethod <: AbstractDerivativeMethod
An abstract type for derivative evaluation method types that will require support generation when employed (e.g., internal node points associated with orthogonal collocation). Such methods can be used with derivatives that depend on independent infinite parameters, but cannot be used for ones that depend on dependent parameters.
InfiniteOpt.OrthogonalCollocation
— TypeOrthogonalCollocation{Q <: MeasureToolbox.AbstractUnivariateMethod
} <: GenerativeDerivativeMethod
A DataType
for storing information about orthogonal collocation over finite elements to approximate derivatives. The constructor is of the form:
OrthogonalCollocation(num_nodes::Int,
[quad::AbstractUnivariateMethod = GaussLobatto])
Fields
num_nodes::Int: The number of collocation points (nodes) per finite element.
quadrature_method::Q: The quadrature method used to choose the collocation points.
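As a hedged usage sketch, the method is typically attached to the infinite parameter that derivatives will be taken with respect to (the keyword usage below assumes a model m; see also set_derivative_method further down):
```julia
# request orthogonal collocation with 3 nodes per finite element for derivatives in t
@infinite_parameter(m, t in [0, 1], num_supports = 10,
                    derivative_method = OrthogonalCollocation(3))
```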
InfiniteOpt.NonGenerativeDerivativeMethod
— TypeNonGenerativeDerivativeMethod <: AbstractDerivativeMethod
An abstract type for derivative evaluation method types that do not require the definition of additional support points. Such methods are amenable to any derivative in InfiniteOpt, including those with dependent infinite parameter dependencies.
InfiniteOpt.FiniteDifference
— TypeFiniteDifference{T <: FDTechnique} <: NonGenerativeDerivativeMethod
A DataType
for storing information about the finite difference method applied to a derivative evaluation. Note that the constructor is of the form:
FiniteDifference([technique::FDTechnique = Backward()],
[add_boundary_constr::Bool = true])
where technique
is the indicated finite difference method to be applied and add_boundary_constr
indicates if the finite difference equation corresponding to a boundary support should be included. For backward difference this corresponds to the terminal point, and for forward difference this corresponds to the initial point. We recommend using add_boundary_constr = false
when a final condition is given with a backward method or when an initial condition is given with a forward method. Note that this argument is ignored for central finite difference, which cannot include any boundary points.
Fields
technique::T: The mathematical technique behind the finite difference.
add_boundary_constraint::Bool: Indicates if the boundary constraint should be included in the transcription (e.g., the terminal boundary backward equation for backward difference).
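A few hedged construction sketches following the constructor form above:
```julia
fd_default = FiniteDifference()                  # Backward() technique, boundary constraint included
fd_central = FiniteDifference(Central())         # central difference; the boundary flag is ignored
fd_forward = FiniteDifference(Forward(), false)  # forward difference, omit the initial boundary equation
```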
InfiniteOpt.FDTechnique
— TypeFDTechnique
An abstract data type for labels of specific techniques applied in the finite difference method in derivative evaluation.
InfiniteOpt.Forward
— TypeForward <: FDTechnique
A technique label for finite difference method that implements a forward difference approximation.
InfiniteOpt.Central
— TypeCentral <: FDTechnique
A technique label for finite difference method that implements a central difference approximation.
InfiniteOpt.Backward
— TypeBackward <: FDTechnique
A technique label for finite difference method that implements a backward difference approximation.
InfiniteOpt.derivative_method
— Methodderivative_method(dref::DerivativeRef)::AbstractDerivativeMethod
Returns the evaluation method employed by dref
that determines the numerical computation scheme that will be used to evaluate the derivative. Note that this is set by the infinite parameter with respect to which the derivative is defined.
Example
julia> derivative_method(dref)
FiniteDifference(Backward, true)
InfiniteOpt.set_derivative_method
— Methodset_derivative_method(pref::IndependentParameterRef,
method::AbstractDerivativeMethod)::Nothing
Specifies the desired derivative evaluation method method
for derivatives that are taken with respect to pref
. Any internal supports exclusively associated with the previous method will be deleted. Also, if any derivatives were evaluated manually, the associated derivative evaluation constraints will be deleted. Errors if the new derivative method generates supports that are incompatible with existing measures.
Example
julia> set_derivative_method(d, OrthogonalCollocation(2))
InfiniteOpt.set_derivative_method
— Methodset_derivative_method(pref::DependentParameterRef,
method::NonGenerativeDerivativeMethod)::Nothing
Specifies the desired derivative evaluation method method
for derivatives that are taken with respect to pref
. Errors if method
is generative (i.e., it requires the definition of additional supports).
Example
julia> set_derivative_method(d, FiniteDifference())
InfiniteOpt.set_all_derivative_methods
— Functionset_all_derivative_methods(model::InfiniteModel,
method::AbstractDerivativeMethod)::Nothing
Sets the desired evaluation method method
for all the derivatives currently added to model
. Note that this is done with respect to the infinite parameters. Errors if a generative method is specified and the model contains dependent parameters.
Example
julia> set_all_derivative_methods(model, OrthogonalCollocation(2))
InfiniteOpt.evaluate
— Methodevaluate(dref::DerivativeRef)::Nothing
Numerically evaluate dref
by computing its auxiliary derivative constraints (e.g., collocation equations) and adding them to the model. For normal usage, it is recommended that this method not be called directly and that TranscriptionOpt instead be left to handle these equations. Errors if evaluate_derivative
is not defined for the derivative method employed.
The resulting constraints can be accessed via derivative_constraints
.
Example
julia> m = InfiniteModel(); @infinite_parameter(m, t in [0,2]); @variable(m, T, Infinite(t));
julia> dref = @deriv(T,t)
∂/∂t[T(t)]
julia> add_supports(t, [0, 0.5, 1, 1.5, 2])
julia> evaluate(dref)
julia> derivative_constraints(dref)
Feasibility
4-element Array{InfOptConstraintRef,1}:
0.5 ∂/∂t[T(t)](0.5) - T(0.5) + T(0) = 0.0
0.5 ∂/∂t[T(t)](1) - T(1) + T(0.5) = 0.0
0.5 ∂/∂t[T(t)](1.5) - T(1.5) + T(1) = 0.0
0.5 ∂/∂t[T(t)](2) - T(2) + T(1.5) = 0.0
InfiniteOpt.evaluate_all_derivatives!
— Functionevaluate_all_derivatives!(model::InfiniteModel)::Nothing
Evaluate all the derivatives in model
by adding the corresponding auxiliary equations to model
. See evaluate
for more information.
Example
julia> m = InfiniteModel();
julia> @infinite_parameter(m, t in [0,2], supports = [0, 1, 2]);
julia> @infinite_parameter(m, x in [0,1], supports = [0, 0.5, 1]);
julia> @variable(m, T, Infinite(x, t));
julia> dref1 = @deriv(T, t); dref2 = @deriv(T, x^2);
julia> evaluate_all_derivatives!(m)
julia> print(m)
Feasibility
Subject to
∂/∂t[T(x, t)](x, 1) - T(x, 1) + T(x, 0) = 0.0, ∀ x ∈ [0, 1]
∂/∂t[T(x, t)](x, 2) - T(x, 2) + T(x, 1) = 0.0, ∀ x ∈ [0, 1]
0.5 ∂/∂x[T(x, t)](0.5, t) - T(0.5, t) + T(0, t) = 0.0, ∀ t ∈ [0, 2]
0.5 ∂/∂x[T(x, t)](1, t) - T(1, t) + T(0.5, t) = 0.0, ∀ t ∈ [0, 2]
0.5 ∂/∂x[∂/∂x[T(x, t)]](0.5, t) - ∂/∂x[T(x, t)](0.5, t) + ∂/∂x[T(x, t)](0, t) = 0.0, ∀ t ∈ [0, 2]
0.5 ∂/∂x[∂/∂x[T(x, t)]](1, t) - ∂/∂x[T(x, t)](1, t) + ∂/∂x[T(x, t)](0.5, t) = 0.0, ∀ t ∈ [0, 2]
InfiniteOpt.has_derivative_constraints
— Methodhas_derivative_constraints(dref::DerivativeRef)::Bool
Return a Bool
indicating whether dref
has been evaluated within the InfiniteModel
and has derivative constraints that have been added to the InfiniteModel
. Note this does not indicate if such constraints have been added to the optimizer model. Thus, with normal usage (i.e., not using evaluate
) this should always return false
.
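Building on the evaluate example earlier in this section, a hedged sketch:
```julia-repl
julia> has_derivative_constraints(dref)  # before evaluate(dref) is called
false

julia> evaluate(dref)

julia> has_derivative_constraints(dref)
true
```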
InfiniteOpt.derivative_constraints
— Methodderivative_constraints(dref::DerivativeRef)::Vector{InfOptConstraintRef}
Return a list of the derivative evaluation constraints for dref
that have been added directly to the InfiniteModel
associated with dref
. An empty vector is returned if there are no such constraints.
InfiniteOpt.delete_derivative_constraints
— Methoddelete_derivative_constraints(dref::DerivativeRef)::Nothing
Delete any derivative constraints of dref
that have been directly added to the InfiniteModel
.
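Continuing the hedged sketch above:
```julia-repl
julia> delete_derivative_constraints(dref)

julia> has_derivative_constraints(dref)
false
```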
InfiniteOpt.evaluate_derivative
— Functionevaluate_derivative(dref::GeneralVariableRef,
method::AbstractDerivativeMethod,
write_model::JuMP.AbstractModel)::Vector{JuMP.AbstractJuMPScalar}
Build expressions for derivative dref
evaluated in accordance with method
. The expressions are of the form lhs - rhs
, where lhs
is a function of derivatives evaluated at some supports of a certain infinite parameter, and rhs
is a function of the derivative arguments evaluated at some supports of that infinite parameter. For example, for finite difference methods at point t = 1
, lhs
is Δt * ∂/∂t[T(1)]
, and rhs
could be T(1+Δt) - T(1)
in the case of forward difference. This is intended as a helper function for evaluate
, which will take the expressions generated by this method and generate constraints that approximate the derivative values by setting the expressions equal to 0. However, one can extend this function to encode custom methods for approximating derivatives. This should invoke add_derivative_supports
if the method is generative, and users will likely find it convenient to use make_reduced_expr
.
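As a hedged illustration of such an extension, the sketch below defines a hypothetical MyBackwardEuler method (the type name, and the assumption that derivative_argument, operator_parameter, and supports dispatch on GeneralVariableRef, are illustrative and not confirmed by this manual) that reproduces a simple backward difference using the lhs - rhs convention and make_reduced_expr described here; it is not the package's built-in implementation:
```julia
using InfiniteOpt, JuMP

# Hypothetical non-generative method, defined purely for illustration
struct MyBackwardEuler <: InfiniteOpt.NonGenerativeDerivativeMethod end

function InfiniteOpt.evaluate_derivative(
    dref::GeneralVariableRef,
    method::MyBackwardEuler,
    write_model::JuMP.AbstractModel
    )
    vref = derivative_argument(dref)   # variable being differentiated
    pref = operator_parameter(dref)    # operator (infinite) parameter
    supps = sort(supports(pref))       # assumes supports were already added to pref
    exprs = Vector{JuMP.AbstractJuMPScalar}(undef, length(supps) - 1)
    for i in eachindex(exprs)
        Δ = supps[i + 1] - supps[i]
        # lhs - rhs form: Δ * d(t_{i+1}) - (v(t_{i+1}) - v(t_i))
        lhs = Δ * InfiniteOpt.make_reduced_expr(dref, pref, supps[i + 1], write_model)
        rhs = InfiniteOpt.make_reduced_expr(vref, pref, supps[i + 1], write_model) -
              InfiniteOpt.make_reduced_expr(vref, pref, supps[i], write_model)
        exprs[i] = lhs - rhs           # evaluate will later constrain this to equal 0
    end
    return exprs
end
```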
InfiniteOpt.generative_support_info
— Methodgenerative_support_info(method::AbstractDerivativeMethod)::AbstractGenerativeInfo
Return the AbstractGenerativeInfo
associated with method
. This is intended as an internal method and should be extended for user-defined derivative methods that are GenerativeDerivativeMethod
s.
InfiniteOpt.support_label
— Methodsupport_label(method::GenerativeDerivativeMethod)
Return the support label associated with method
if there is one, errors otherwise. This depends on generative_support_info
being defined for the type of method
.
InfiniteOpt.make_reduced_expr
— Functionmake_reduced_expr(vref::GeneralVariableRef, pref::GeneralVariableRef,
support::Float64, write_model::Union{InfiniteModel, JuMP.Model})
Given the argument variable vref
and the operator parameter pref
from a derivative, build and return the reduced expression in accordance with the support support
with respect to pref
. New point/semi-infinite variables will be written to write_model
. This is solely intended as a helper function for derivative evaluation.
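For intuition, a hedged sketch of the resulting reduced expressions (reusing m, T, t, and dref from the evaluate example; the printed forms mirror that example but may vary by version):
```julia-repl
julia> InfiniteOpt.make_reduced_expr(T, t, 0.5, m)     # T(t) restricted to the support t = 0.5
T(0.5)

julia> InfiniteOpt.make_reduced_expr(dref, t, 0.5, m)  # the derivative restricted to t = 0.5
∂/∂t[T(t)](0.5)
```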