pydrake.solvers
Bindings for Solving Mathematical Programs.
If you are formulating constraints using symbolic formulas, please review the top-level documentation for pydrake.math.
- pydrake.solvers.AddBilinearProductMcCormickEnvelopeSos2(prog: pydrake.solvers.MathematicalProgram, x: pydrake.symbolic.Variable, y: pydrake.symbolic.Variable, w: pydrake.symbolic.Expression, phi_x: numpy.ndarray[numpy.float64[m, 1]], phi_y: numpy.ndarray[numpy.float64[m, 1]], Bx: numpy.ndarray[object[m, 1]], By: numpy.ndarray[object[m, 1]], binning: pydrake.solvers.IntervalBinning) numpy.ndarray[object[m, n]]
Add constraints to the optimization program, such that the bilinear product x * y is approximated by w, using a Special Ordered Set of Type 2 (SOS2) constraint. To do so, we assume that the range of x is [x_min, x_max], and the range of y is [y_min, y_max]. We first consider two arrays φˣ, φʸ, satisfying
x_min = φˣ₀ < φˣ₁ < ... < φˣₘ = x_max
y_min = φʸ₀ < φʸ₁ < ... < φʸₙ = y_max
and divide the range of x into intervals [φˣ₀, φˣ₁], [φˣ₁, φˣ₂], … , [φˣₘ₋₁, φˣₘ] and the range of y into intervals [φʸ₀, φʸ₁], [φʸ₁, φʸ₂], … , [φʸₙ₋₁, φʸₙ]. The xy plane is thus cut into rectangles, with each rectangle as [φˣᵢ, φˣᵢ₊₁] x [φʸⱼ, φʸⱼ₊₁]. The convex hull of the surface z = x * y for x, y in each rectangle is a tetrahedron. We then approximate the bilinear product x * y with w, such that (x, y, w) is in one of the tetrahedrons.
We use two different encoding schemes on the binary variables, to determine which interval is active. We can choose either linear or logarithmic binning. When using linear binning, for a variable with N intervals, we use N binary variables, and B(i) = 1 indicates the variable is in the i’th interval. When using logarithmic binning, we use ⌈log₂(N)⌉ binary variables. If these binary variables represent integer M in the reflected Gray code, then the continuous variable is in the M’th interval.
- Parameter
prog
: The program to which the bilinear product constraint is added
- Parameter
x
: The decision variable.
- Parameter
y
: The decision variable.
- Parameter
w
: The expression to approximate x * y
- Parameter
phi_x
: The end points of the intervals for x.
- Parameter
phi_y
: The end points of the intervals for y.
- Parameter
Bx
: The binary variables for the interval in which x stays, encoded as described above.
- Parameter
By
: The binary variables for the interval in which y stays, encoded as described above.
- Parameter
binning
: Determine whether to use linear binning or logarithmic binning.
- Returns
lambda The auxiliary continuous variables.
The constraints we impose are
x = (φˣ)ᵀ * ∑ⱼ λᵢⱼ
y = (φʸ)ᵀ * ∑ᵢ λᵢⱼ
w = ∑ᵢⱼ φˣᵢ * φʸⱼ * λᵢⱼ
Both ∑ⱼ λᵢⱼ = λ.rowwise().sum() and ∑ᵢ λᵢⱼ = λ.colwise().sum() satisfy the SOS2 constraint.
If x ∈ [φx(M), φx(M+1)] and y ∈ [φy(N), φy(N+1)], then only λ(M, N), λ(M + 1, N), λ(M, N + 1) and λ(M+1, N+1) can be strictly positive, all other λ(i, j) are zero.
Note
We DO NOT add the constraint Bx(i) ∈ {0, 1}, By(j) ∈ {0, 1} in this function. It is the user’s responsibility to ensure that these constraints are enforced.
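A minimal usage sketch (not part of the original documentation; the interval end points and binary-variable counts below are illustrative assumptions, using linear binning so each variable with two intervals gets two binary variables):
import numpy as np
from pydrake.solvers import (
    AddBilinearProductMcCormickEnvelopeSos2,
    IntervalBinning,
    MathematicalProgram,
)

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, "x")[0]
y = prog.NewContinuousVariables(1, "y")[0]
w = prog.NewContinuousVariables(1, "w")[0]
# Two intervals per variable: phi_x cuts [0, 2], phi_y cuts [0, 4] (illustrative values).
phi_x = np.array([0.0, 1.0, 2.0])
phi_y = np.array([0.0, 2.0, 4.0])
# Linear binning: one binary variable per interval.
Bx = prog.NewBinaryVariables(2, "Bx")
By = prog.NewBinaryVariables(2, "By")
lam = AddBilinearProductMcCormickEnvelopeSos2(
    prog, x, y, w, phi_x, phi_y, Bx, By, IntervalBinning.kLinear)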
- pydrake.solvers.AddLogarithmicSos1Constraint(prog: pydrake.solvers.MathematicalProgram, num_lambda: int) tuple[numpy.ndarray[object[m, 1]], numpy.ndarray[object[m, 1]]]
Adds the special ordered set of type 1 (SOS1) constraint. Namely
λ(0) + ... + λ(n-1) = 1
λ(i) ≥ 0 ∀i
∃ j ∈ {0, 1, ..., n-1}, s.t λ(j) = 1
where one and only one λ(i) is 1 and all other λ(j) are 0. We will need to add ⌈log₂(n)⌉ binary variables, where n is the number of rows in λ. For more information, please refer to Modeling Disjunctive Constraints with a Logarithmic Number of Binary Variables and Constraints by J. Vielma and G. Nemhauser, 2011.
- Parameter
prog
: The program to which the SOS1 constraint is added.
- Parameter
num_lambda
: n in the documentation above.
- Returns
(lambda, y) lambda is λ in the documentation above. Notice that the entries of λ are declared as continuous variables, but they only admit binary solutions. y is a vector of binary variables of size ⌈log₂(n)⌉. When this SOS1 constraint is satisfied, suppose that λ(i)=1 and λ(j)=0 ∀ j≠i; then y is the reflected Gray code of i. For example, suppose n = 8 and i = 5; then y is a vector of size ⌈log₂(n)⌉ = 3, and the value of y is (1, 1, 0), which equals 5 in the reflected Gray code.
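A short usage sketch (illustrative, not from the original documentation):
from pydrake.solvers import AddLogarithmicSos1Constraint, MathematicalProgram

prog = MathematicalProgram()
# n = 8 lambdas; the helper adds ceil(log2(8)) = 3 binary variables y.
lam, y = AddLogarithmicSos1Constraint(prog, 8)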
- pydrake.solvers.AddLogarithmicSos2Constraint(prog: pydrake.solvers.MathematicalProgram, lambdas: numpy.ndarray[object[m, 1]], binary_variable_name: str = 'y') numpy.ndarray[object[m, 1]]
Adds the special ordered set 2 (SOS2) constraint.
See also
AddLogarithmicSos2Constraint.
- pydrake.solvers.AddSos2Constraint(prog: pydrake.solvers.MathematicalProgram, lambdas: numpy.ndarray[object[m, 1]], y: numpy.ndarray[object[m, 1]]) None
Adds the special ordered set 2 (SOS2) constraint. y(i) takes binary values (either 0 or 1).
y(i) = 1 => λ(i) + λ(i + 1) = 1.
See also
AddLogarithmicSos2Constraint for a complete explanation on SOS2 constraint.
- Parameter
prog
: The optimization program to which the SOS2 constraint is added.
- Parameter
lambda
: At most two entries in λ can be strictly positive, and these two entries have to be adjacent. All other entries are zero. Moreover, these two entries should sum up to 1.
- Parameter
y
: y(i) takes binary values, and determines which two entries in λ can be strictly positive. Throws a runtime error if y.rows() != lambda.rows() - 1.
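A short usage sketch (illustrative, not from the original documentation):
from pydrake.solvers import AddSos2Constraint, MathematicalProgram

prog = MathematicalProgram()
lam = prog.NewContinuousVariables(4, "lambda")
y = prog.NewBinaryVariables(3, "y")   # y.rows() must equal lambda.rows() - 1
AddSos2Constraint(prog, lam, y)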
- class pydrake.solvers.AugmentedLagrangianNonsmooth
Compute the augmented Lagrangian (AL) of a given mathematical program
min f(x)
s.t h(x) = 0
    l <= g(x) <= u
    x_lo <= x <= x_up
We first turn it into an equality constrained program with non-negative slack variable s as follows
min f(x)
s.t h(x) = 0
    c(x) - s = 0
    s >= 0
Depending on the option include_x_bounds, the constraint h(x)=0, c(x)>=0 may or may not include the bounding box constraint x_lo <= x <= x_up.
The (non-smooth) augmented Lagrangian is defined as
L(x, λ, μ) = f(x) − λ₁ᵀh(x) + μ/2 h(x)ᵀh(x) - λ₂ᵀ(c(x)-s) + μ/2 (c(x)-s)ᵀ(c(x)-s)
where s = max(c(x) - λ₂/μ, 0).
For more details, refer to section 17.4 of Numerical Optimization by Jorge Nocedal and Stephen Wright, Edition 1, 1999 (This formulation isn’t presented in Edition 2, but to stay consistent with Edition 2, we use μ/2 as the coefficient of the quadratic penalty term instead of 1/(2μ) in Edition 1). Note that the augmented Lagrangian L(x, λ, μ) is NOT a smooth function of x, since s = max(c(x) - λ₂/μ, 0) is non-smooth at c(x) - λ₂/μ = 0.
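As a usage sketch (the toy program and the evaluation point below are illustrative assumptions, not from the original documentation):
import numpy as np
from pydrake.solvers import AugmentedLagrangianNonsmooth, MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddCost(x[0]**2 + x[1]**2)
prog.AddConstraint(x[0] + x[1] == 1)

al = AugmentedLagrangianNonsmooth(prog, include_x_bounds=False)
# Evaluate L(x, λ, μ) at a guess, with zero multipliers and penalty weight μ = 10.
al_value, constraint_residue, cost = al.Eval(
    x=np.array([0.4, 0.5]),
    lambda_val=np.zeros(al.lagrangian_size()),
    mu=10.0)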
- __init__(self: pydrake.solvers.AugmentedLagrangianNonsmooth, prog: pydrake.solvers.MathematicalProgram, include_x_bounds: bool) None
- Parameter
prog
: The mathematical program we will evaluate.
- Parameter
include_x_bounds
: Whether the Lagrangian and the penalty for the bounds x_lo <= x <= x_up are included in the augmented Lagrangian L(x, λ, μ) or not.
- Eval(*args, **kwargs)
Overloaded function.
Eval(self: pydrake.solvers.AugmentedLagrangianNonsmooth, x: numpy.ndarray[numpy.float64[m, 1]], lambda_val: numpy.ndarray[numpy.float64[m, 1]], mu: float) -> tuple[float, numpy.ndarray[numpy.float64[m, 1]], float]
- Parameter
x
: The value of all the decision variables.
- Parameter
lambda_val
: The estimated Lagrangian multipliers. The order of the Lagrangian multipliers is as follows: We first evaluate all constraints. Then for each row of the constraint, if it is an equality constraint, we append one single Lagrangian multiplier. Otherwise we append the Lagrangian multipliers for the lower and upper bounds (where the lower comes before the upper), if the corresponding bound is not ±∞. The order of evaluating all the constraints is the same as prog.GetAllConstraints() except for prog.bounding_box_constraints(). If include_x_bounds=true, then we aggregate all the bounding_box_constraints() and evaluate them at the end of all constraints.
- Parameter
mu
: μ in the documentation above. The constant for penalty term weight. This should be a strictly positive number.
- Parameter
constraint_residue
: The value of all the constraints. For an equality constraint c(x)=0 or the inequality constraint c(x)>=0, the residue is c(x). Depending on include_x_bounds, constraint_residue may or may not contain the residue for the bounding box constraints x_lo <= x <= x_up at the end.
- Parameter
cost
: The value of the cost function f(x).
- Returns
The evaluated Augmented Lagrangian (AL) L(x, λ, μ).
Eval(self: pydrake.solvers.AugmentedLagrangianNonsmooth, x: numpy.ndarray[object[m, 1]], lambda_val: numpy.ndarray[numpy.float64[m, 1]], mu: float) -> tuple[pydrake.autodiffutils.AutoDiffXd, numpy.ndarray[object[m, 1]], pydrake.autodiffutils.AutoDiffXd]
- Parameter
x
: The value of all the decision variables.
- Parameter
lambda_val
: The estimated Lagrangian multipliers. The order of the Lagrangian multipliers is as follows: We first evaluate all constraints. Then for each row of the constraint, if it is an equality constraint, we append one single Lagrangian multiplier. Otherwise we append the Lagrangian multipliers for the lower and upper bounds (where the lower comes before the upper), if the corresponding bound is not ±∞. The order of evaluating all the constraints is the same as prog.GetAllConstraints() except for prog.bounding_box_constraints(). If include_x_bounds=true, then we aggregate all the bounding_box_constraints() and evaluate them at the end of all constraints.
- Parameter
mu
: μ in the documentation above. The constant for penalty term weight. This should be a strictly positive number.
- Parameter
constraint_residue
: The value of all the constraints. For an equality constraint c(x)=0 or the inequality constraint c(x)>=0, the residue is c(x). Depending on include_x_bounds, constraint_residue may or may not contain the residue for the bounding box constraints x_lo <= x <= x_up at the end.
- Parameter
cost
: The value of the cost function f(x).
- Returns
The evaluated Augmented Lagrangian (AL) L(x, λ, μ).
- include_x_bounds(self: pydrake.solvers.AugmentedLagrangianNonsmooth) bool
- Returns
Whether the bounding box constraint x_lo <= x <= x_up is included in the augmented Lagrangian L(x, λ, μ).
- is_equality(self: pydrake.solvers.AugmentedLagrangianNonsmooth) list[bool]
- Returns
Whether each constraint is equality or not. The order of the constraint is explained in the class documentation.
- lagrangian_size(self: pydrake.solvers.AugmentedLagrangianNonsmooth) int
- Returns
The size of the Lagrangian multiplier λ.
- prog(self: pydrake.solvers.AugmentedLagrangianNonsmooth) pydrake.solvers.MathematicalProgram
- Returns
The mathematical program for which the augmented Lagrangian is computed.
- x_lo(self: pydrake.solvers.AugmentedLagrangianNonsmooth) numpy.ndarray[numpy.float64[m, 1]]
- Returns
all the lower bounds of x.
- x_up(self: pydrake.solvers.AugmentedLagrangianNonsmooth) numpy.ndarray[numpy.float64[m, 1]]
- Returns
all the upper bounds of x.
- class pydrake.solvers.AugmentedLagrangianSmooth
Compute the augmented Lagrangian (AL) of a given mathematical program
min f(x)
s.t h(x) = 0
    l <= g(x) <= u
    x_lo <= x <= x_up
We first turn it into an equality constrained program with non-negative slack variable s as follows
min f(x)
s.t h(x) = 0
    c(x) - s = 0
    s >= 0
We regard this as an optimization problem on variable (x, s), with equality constraints h(x) = 0, c(x)-s = 0, and the bound constraint s >= 0.
Depending on the option include_x_bounds, the constraint h(x)=0, c(x)>=0 may or may not include the bounding box constraint x_lo <= x <= x_up.
The (smooth) augmented Lagrangian is defined as
L(x, s, λ, μ) = f(x) − λ₁ᵀh(x) + μ/2 h(x)ᵀh(x) - λ₂ᵀ(c(x)-s) + μ/2 (c(x)-s)ᵀ(c(x)-s)
For more details, refer to section 17.4 of Numerical Optimization by Jorge Nocedal and Stephen Wright, Edition 2, 2006. Note that the augmented Lagrangian L(x, s, λ, μ) is a smooth function of (x, s).
This is the implementation used in LANCELOT. To solve the nonlinear optimization through this Augmented Lagrangian, the nonlinear solver should be able to handle bounding box constraints on the decision variables.
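A usage sketch highlighting that the slack s is now an explicit input (the toy program below is an illustrative assumption, not from the original documentation):
import numpy as np
from pydrake.solvers import AugmentedLagrangianSmooth, MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddCost(x[0]**2 + x[1]**2)
prog.AddConstraint(x[0] + x[1] == 1)
prog.AddConstraint(x[0]**2 + x[1]**2 <= 4)   # inequality handled through a slack

al = AugmentedLagrangianSmooth(prog, include_x_bounds=False)
s = np.zeros(al.s_size())                    # s has one entry per slack handled by the AL
al_value, constraint_residue, cost = al.Eval(
    x=np.array([0.4, 0.5]),
    s=s,
    lambda_val=np.zeros(al.lagrangian_size()),
    mu=10.0)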
- __init__(self: pydrake.solvers.AugmentedLagrangianSmooth, prog: pydrake.solvers.MathematicalProgram, include_x_bounds: bool) None
- Parameter
prog
: The mathematical program we will evaluate.
- Parameter
include_x_bounds
: Whether the Lagrangian and the penalty for the bounds x_lo <= x <= x_up are included in the augmented Lagrangian L(x, s, λ, μ) or not.
- Eval(*args, **kwargs)
Overloaded function.
Eval(self: pydrake.solvers.AugmentedLagrangianSmooth, x: numpy.ndarray[numpy.float64[m, 1]], s: numpy.ndarray[numpy.float64[m, 1]], lambda_val: numpy.ndarray[numpy.float64[m, 1]], mu: float) -> tuple[float, numpy.ndarray[numpy.float64[m, 1]], float]
- Parameter
x
: The value of all the decision variables in prog().
- Parameter
s
: The value of all slack variables s.
- Parameter
lambda_val
: The estimated Lagrangian multipliers. The order of the Lagrangian multipliers is as follows: We first evaluate all constraints. Then for each row of the constraint, if it is an equality constraint, we append one single Lagrangian multiplier. Otherwise we append the Lagrangian multipliers for the lower and upper bounds (where the lower comes before the upper), if the corresponding bound is not ±∞. The order of evaluating all the constraints is the same as prog.GetAllConstraints() except for prog.bounding_box_constraints(). If include_x_bounds=true, then we aggregate all the bounding_box_constraints() and evaluate them at the end of all constraints.
- Parameter
mu
: μ in the documentation above. The constant for penalty term weight. This should be a strictly positive number.
- Parameter
constraint_residue
: The value of all the constraints. For an equality constraint c(x)=0, the residue is c(x); for an inequality constraint c(x)>=0, the residue is c(x)-s where s is the corresponding slack variable. Depending on include_x_bounds, constraint_residue may or may not contain the residue for the bounding box constraints x_lo <= x <= x_up at the end.
- Parameter
cost
: The value of the cost function f(x).
- Returns
The evaluated Augmented Lagrangian (AL) L(x, s, λ, μ).
Note
This Eval function differs from the AugmentedLagrangianNonsmooth::Eval() function in that s is an input argument.
Eval(self: pydrake.solvers.AugmentedLagrangianSmooth, x: numpy.ndarray[object[m, 1]], s: numpy.ndarray[object[m, 1]], lambda_val: numpy.ndarray[numpy.float64[m, 1]], mu: float) -> tuple[pydrake.autodiffutils.AutoDiffXd, numpy.ndarray[object[m, 1]], pydrake.autodiffutils.AutoDiffXd]
- Parameter
x
: The value of all the decision variables in prog().
- Parameter
s
: The value of all slack variables s.
- Parameter
lambda_val
: The estimated Lagrangian multipliers. The order of the Lagrangian multipliers is as follows: We first evaluate all constraints. Then for each row of the constraint, if it is an equality constraint, we append one single Lagrangian multiplier. Otherwise we append the Lagrangian multipliers for the lower and upper bounds (where the lower comes before the upper), if the corresponding bound is not ±∞. The order of evaluating all the constraints is the same as prog.GetAllConstraints() except for prog.bounding_box_constraints(). If include_x_bounds=true, then we aggregate all the bounding_box_constraints() and evaluate them at the end of all constraints.
- Parameter
mu
: μ in the documentation above. The constant for penalty term weight. This should be a strictly positive number.
- Parameter
constraint_residue
: The value of all the constraints. For an equality constraint c(x)=0, the residue is c(x); for an inequality constraint c(x)>=0, the residue is c(x)-s where s is the corresponding slack variable. Depending on include_x_bounds, constraint_residue may or may not contain the residue for the bounding box constraints x_lo <= x <= x_up at the end.
- Parameter
cost
: The value of the cost function f(x).
- Returns
The evaluated Augmented Lagrangian (AL) L(x, s, λ, μ).
Note
This Eval function differs from the AugmentedLagrangianNonsmooth::Eval() function in that s is an input argument.
- include_x_bounds(self: pydrake.solvers.AugmentedLagrangianSmooth) bool
- Returns
Whether the bounding box constraint x_lo <= x <= x_up is included in the augmented Lagrangian L(x, s, λ, μ).
- is_equality(self: pydrake.solvers.AugmentedLagrangianSmooth) list[bool]
- Returns
Whether each constraint is equality or not. The order of the constraint is explained in the class documentation.
- lagrangian_size(self: pydrake.solvers.AugmentedLagrangianSmooth) int
- Returns
The size of the Lagrangian multiplier λ.
- prog(self: pydrake.solvers.AugmentedLagrangianSmooth) pydrake.solvers.MathematicalProgram
- Returns
The mathematical program for which the augmented Lagrangian is computed.
- s_size(self: pydrake.solvers.AugmentedLagrangianSmooth) int
- Returns
The size of the slack variable s.
- x_lo(self: pydrake.solvers.AugmentedLagrangianSmooth) numpy.ndarray[numpy.float64[m, 1]]
- Returns
All the lower bounds of x.
- x_up(self: pydrake.solvers.AugmentedLagrangianSmooth) numpy.ndarray[numpy.float64[m, 1]]
- Returns
All the upper bounds of x.
- template pydrake.solvers.Binding
Instantiations:
Binding[EvaluatorBase]
Binding[Constraint]
Binding[LinearConstraint]
Binding[QuadraticConstraint]
Binding[LorentzConeConstraint]
Binding[RotatedLorentzConeConstraint]
Binding[LinearEqualityConstraint]
Binding[BoundingBoxConstraint]
Binding[PositiveSemidefiniteConstraint]
Binding[LinearMatrixInequalityConstraint]
Binding[LinearComplementarityConstraint]
Binding[ExponentialConeConstraint]
Binding[MinimumValueLowerBoundConstraint]
Binding[MinimumValueUpperBoundConstraint]
Binding[ExpressionConstraint]
Binding[Cost]
Binding[LinearCost]
Binding[QuadraticCost]
Binding[L1NormCost]
Binding[L2NormCost]
Binding[LInfNormCost]
Binding[PerspectiveQuadraticCost]
Binding[ExpressionCost]
Binding[VisualizationCallback]
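Bindings are usually obtained from MathematicalProgram rather than constructed directly; a small sketch (the program below is an illustrative assumption):
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
binding = prog.AddLinearConstraint(x[0] + x[1] <= 1)   # a Binding[LinearConstraint]
print(binding.evaluator().num_constraints())           # the wrapped LinearConstraint
print(binding.variables())                             # the bound decision variables
print(binding.ToLatex())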
- class pydrake.solvers.Binding[BoundingBoxConstraint]
- __init__(self: pydrake.solvers.Binding[BoundingBoxConstraint], c: pydrake.solvers.BoundingBoxConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[BoundingBoxConstraint]) pydrake.solvers.BoundingBoxConstraint
- ToLatex(self: pydrake.solvers.Binding[BoundingBoxConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[BoundingBoxConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[Constraint]
- __init__(*args, **kwargs)
Overloaded function.
__init__(self: pydrake.solvers.Binding[Constraint], c: pydrake.solvers.Constraint, v: numpy.ndarray[object[m, 1]]) -> None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
__init__(self: pydrake.solvers.Binding[Constraint], arg0: object) -> None
- evaluator(self: pydrake.solvers.Binding[Constraint]) pydrake.solvers.Constraint
- ToLatex(self: pydrake.solvers.Binding[Constraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[Constraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[Cost]
- __init__(*args, **kwargs)
Overloaded function.
__init__(self: pydrake.solvers.Binding[Cost], c: pydrake.solvers.Cost, v: numpy.ndarray[object[m, 1]]) -> None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
__init__(self: pydrake.solvers.Binding[Cost], arg0: object) -> None
- evaluator(self: pydrake.solvers.Binding[Cost]) pydrake.solvers.Cost
- ToLatex(self: pydrake.solvers.Binding[Cost], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[Cost]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[EvaluatorBase]
- __init__(*args, **kwargs)
Overloaded function.
__init__(self: pydrake.solvers.Binding[EvaluatorBase], c: pydrake.solvers.EvaluatorBase, v: numpy.ndarray[object[m, 1]]) -> None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
__init__(self: pydrake.solvers.Binding[EvaluatorBase], arg0: object) -> None
- evaluator(self: pydrake.solvers.Binding[EvaluatorBase]) pydrake.solvers.EvaluatorBase
- ToLatex(self: pydrake.solvers.Binding[EvaluatorBase], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[EvaluatorBase]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[ExponentialConeConstraint]
- __init__(self: pydrake.solvers.Binding[ExponentialConeConstraint], c: pydrake.solvers.ExponentialConeConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[ExponentialConeConstraint]) pydrake.solvers.ExponentialConeConstraint
- ToLatex(self: pydrake.solvers.Binding[ExponentialConeConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[ExponentialConeConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[ExpressionConstraint]
- __init__(self: pydrake.solvers.Binding[ExpressionConstraint], c: pydrake.solvers.ExpressionConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[ExpressionConstraint]) pydrake.solvers.ExpressionConstraint
- ToLatex(self: pydrake.solvers.Binding[ExpressionConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[ExpressionConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[ExpressionCost]
- __init__(self: pydrake.solvers.Binding[ExpressionCost], c: pydrake.solvers.ExpressionCost, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[ExpressionCost]) pydrake.solvers.ExpressionCost
- ToLatex(self: pydrake.solvers.Binding[ExpressionCost], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[ExpressionCost]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[L1NormCost]
- __init__(self: pydrake.solvers.Binding[L1NormCost], c: pydrake.solvers.L1NormCost, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[L1NormCost]) pydrake.solvers.L1NormCost
- ToLatex(self: pydrake.solvers.Binding[L1NormCost], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[L1NormCost]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[L2NormCost]
- __init__(self: pydrake.solvers.Binding[L2NormCost], c: pydrake.solvers.L2NormCost, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[L2NormCost]) pydrake.solvers.L2NormCost
- ToLatex(self: pydrake.solvers.Binding[L2NormCost], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[L2NormCost]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[LinearComplementarityConstraint]
- __init__(self: pydrake.solvers.Binding[LinearComplementarityConstraint], c: pydrake.solvers.LinearComplementarityConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[LinearComplementarityConstraint]) pydrake.solvers.LinearComplementarityConstraint
- ToLatex(self: pydrake.solvers.Binding[LinearComplementarityConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[LinearComplementarityConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[LinearConstraint]
- __init__(self: pydrake.solvers.Binding[LinearConstraint], c: pydrake.solvers.LinearConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[LinearConstraint]) pydrake.solvers.LinearConstraint
- ToLatex(self: pydrake.solvers.Binding[LinearConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[LinearConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[LinearCost]
- __init__(self: pydrake.solvers.Binding[LinearCost], c: pydrake.solvers.LinearCost, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[LinearCost]) pydrake.solvers.LinearCost
- ToLatex(self: pydrake.solvers.Binding[LinearCost], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[LinearCost]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[LinearEqualityConstraint]
- __init__(self: pydrake.solvers.Binding[LinearEqualityConstraint], c: pydrake.solvers.LinearEqualityConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[LinearEqualityConstraint]) pydrake.solvers.LinearEqualityConstraint
- ToLatex(self: pydrake.solvers.Binding[LinearEqualityConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[LinearEqualityConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[LinearMatrixInequalityConstraint]
- __init__(self: pydrake.solvers.Binding[LinearMatrixInequalityConstraint], c: pydrake.solvers.LinearMatrixInequalityConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[LinearMatrixInequalityConstraint]) pydrake.solvers.LinearMatrixInequalityConstraint
- ToLatex(self: pydrake.solvers.Binding[LinearMatrixInequalityConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[LinearMatrixInequalityConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[LInfNormCost]
- __init__(self: pydrake.solvers.Binding[LInfNormCost], c: pydrake.solvers.LInfNormCost, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[LInfNormCost]) pydrake.solvers.LInfNormCost
- ToLatex(self: pydrake.solvers.Binding[LInfNormCost], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[LInfNormCost]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[LorentzConeConstraint]
- __init__(self: pydrake.solvers.Binding[LorentzConeConstraint], c: pydrake.solvers.LorentzConeConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[LorentzConeConstraint]) pydrake.solvers.LorentzConeConstraint
- ToLatex(self: pydrake.solvers.Binding[LorentzConeConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[LorentzConeConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[MinimumValueLowerBoundConstraint]
- __init__(self: pydrake.solvers.Binding[MinimumValueLowerBoundConstraint], c: pydrake.solvers.MinimumValueLowerBoundConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[MinimumValueLowerBoundConstraint]) pydrake.solvers.MinimumValueLowerBoundConstraint
- ToLatex(self: pydrake.solvers.Binding[MinimumValueLowerBoundConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[MinimumValueLowerBoundConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[MinimumValueUpperBoundConstraint]
- __init__(self: pydrake.solvers.Binding[MinimumValueUpperBoundConstraint], c: pydrake.solvers.MinimumValueUpperBoundConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[MinimumValueUpperBoundConstraint]) pydrake.solvers.MinimumValueUpperBoundConstraint
- ToLatex(self: pydrake.solvers.Binding[MinimumValueUpperBoundConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[MinimumValueUpperBoundConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[PerspectiveQuadraticCost]
- __init__(self: pydrake.solvers.Binding[PerspectiveQuadraticCost], c: pydrake.solvers.PerspectiveQuadraticCost, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[PerspectiveQuadraticCost]) pydrake.solvers.PerspectiveQuadraticCost
- ToLatex(self: pydrake.solvers.Binding[PerspectiveQuadraticCost], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[PerspectiveQuadraticCost]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[PositiveSemidefiniteConstraint]
- __init__(self: pydrake.solvers.Binding[PositiveSemidefiniteConstraint], c: pydrake.solvers.PositiveSemidefiniteConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[PositiveSemidefiniteConstraint]) pydrake.solvers.PositiveSemidefiniteConstraint
- ToLatex(self: pydrake.solvers.Binding[PositiveSemidefiniteConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[PositiveSemidefiniteConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[QuadraticConstraint]
- __init__(self: pydrake.solvers.Binding[QuadraticConstraint], c: pydrake.solvers.QuadraticConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[QuadraticConstraint]) pydrake.solvers.QuadraticConstraint
- ToLatex(self: pydrake.solvers.Binding[QuadraticConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[QuadraticConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[QuadraticCost]
- __init__(self: pydrake.solvers.Binding[QuadraticCost], c: pydrake.solvers.QuadraticCost, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[QuadraticCost]) pydrake.solvers.QuadraticCost
- ToLatex(self: pydrake.solvers.Binding[QuadraticCost], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[QuadraticCost]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[RotatedLorentzConeConstraint]
- __init__(self: pydrake.solvers.Binding[RotatedLorentzConeConstraint], c: pydrake.solvers.RotatedLorentzConeConstraint, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[RotatedLorentzConeConstraint]) pydrake.solvers.RotatedLorentzConeConstraint
- ToLatex(self: pydrake.solvers.Binding[RotatedLorentzConeConstraint], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[RotatedLorentzConeConstraint]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.Binding[VisualizationCallback]
- __init__(self: pydrake.solvers.Binding[VisualizationCallback], c: pydrake.solvers.VisualizationCallback, v: numpy.ndarray[object[m, 1]]) None
Concatenates each VectorDecisionVariable object in v into a single column vector, and binds this column vector of decision variables with the constraint c.
- evaluator(self: pydrake.solvers.Binding[VisualizationCallback]) pydrake.solvers.VisualizationCallback
- ToLatex(self: pydrake.solvers.Binding[VisualizationCallback], precision: int = 3) str
Returns a LaTeX description of this Binding. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
- variables(self: pydrake.solvers.Binding[VisualizationCallback]) numpy.ndarray[object[m, 1]]
- class pydrake.solvers.BoundingBoxConstraint
Bases:
pydrake.solvers.LinearConstraint
Implements a constraint of the form lb <= x <= ub.
Note: the base Constraint class (as implemented at the moment) could play this role. But this class enforces that it is ONLY a bounding box constraint, and not something more general. Some solvers use this information to handle bounding box constraints differently than general constraints, so use of this form is encouraged.
- __init__(self: pydrake.solvers.BoundingBoxConstraint, lb: numpy.ndarray[numpy.float64[m, 1]], ub: numpy.ndarray[numpy.float64[m, 1]]) None
- pydrake.solvers.ChooseBestSolver(prog: pydrake.solvers.MathematicalProgram) pydrake.solvers.SolverId
Choose the best solver given the formulation in the optimization program and the availability of the solvers.
- Raises
RuntimeError if there is no available solver for prog.
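A typical pattern pairs ChooseBestSolver with MakeSolver (the small LP below is an illustrative assumption, not from the original documentation):
from pydrake.solvers import ChooseBestSolver, MakeSolver, MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddLinearCost(x[0] + x[1])
prog.AddBoundingBoxConstraint(0, 1, x)

solver_id = ChooseBestSolver(prog)
solver = MakeSolver(solver_id)
result = solver.Solve(prog)
print(solver_id.name(), result.is_success())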
- class pydrake.solvers.ClarabelSolver
Bases:
pydrake.solvers.SolverInterface
An interface to wrap Clarabel https://github.com/oxfordcontrol/Clarabel.cpp
- __init__(self: pydrake.solvers.ClarabelSolver) None
- static id() pydrake.solvers.SolverId
- class pydrake.solvers.ClarabelSolverDetails
The Clarabel solver details after calling the Solve() function. The user can call MathematicalProgramResult::get_solver_details<ClarabelSolver>() to obtain the details.
- __init__(*args, **kwargs)
- property iterations
Number of iterations in Clarabel.
- property solve_time
The solve time inside Clarabel in seconds.
- property status
The status from Clarabel.
- class pydrake.solvers.ClpSolver
Bases:
pydrake.solvers.SolverInterface
A wrapper to call CLP using Drake’s MathematicalProgram.
Note
Currently our ClpSolver has a memory issue when solving a QP. The user should be aware of this risk.
Note
The user can adjust the problem scaling option by setting “scaling” as mentioned in https://github.com/coin-or/Clp/blob/43129ba1a7fd66ce70fe0761fcd696951917ed2e/src/ClpModel.hpp#L705-L706. For example, prog.SetSolverOption(ClpSolver::id(), “scaling”, 0); will do “no scaling”. The default is 1.
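The snippet in the note translates to Python roughly as follows (the tiny LP is an illustrative assumption):
from pydrake.solvers import ClpSolver, MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, "x")
prog.AddLinearCost(x[0])
prog.AddBoundingBoxConstraint(0, 1, x)
prog.SetSolverOption(ClpSolver.id(), "scaling", 0)   # "no scaling"

solver = ClpSolver()
if solver.available():
    result = solver.Solve(prog)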
- __init__(self: pydrake.solvers.ClpSolver) None
- static id() pydrake.solvers.SolverId
- class pydrake.solvers.ClpSolverDetails
The CLP solver details after calling Solve() function. The user can call MathematicalProgramResult::get_solver_details<ClpSolver>() to obtain the details.
- __init__(*args, **kwargs)
- property status
Refer to the ClpModel::status() function for the meaning of the status code.
-1: unknown error.
0: optimal.
1: primal infeasible.
2: dual infeasible.
3: stopped on iterations or time.
4: stopped due to errors.
5: stopped by event handler.
- class pydrake.solvers.CommonSolverOption
Some options can be applied not just to one solver, but to many solvers (for example, many solvers support printing out the progress in each iteration). CommonSolverOption contains the names of these supported options. The user can use these options as the “key” in SolverOptions::SetOption().
Members:
kPrintFileName : Many solvers support printing the progress of each iteration to a file. The user can call SolverOptions::SetOption(kPrintFileName, file_name) where file_name is a string. If the user doesn’t want to print to a file, then use SolverOptions::SetOption(kPrintFileName, “”), where the empty string “” indicates no print.
kPrintToConsole : Many solvers support printing the progress of each iteration to the console. The user can call SolverOptions::SetOption(kPrintToConsole, 1) to turn on printing to the console, or SolverOptions::SetOption(kPrintToConsole, 0) to turn off printing to the console.
kStandaloneReproductionFileName : Some of our solver interfaces support writing a standalone (e.g. it does not depend on Drake) minimal reproduction of the problem to a file. This is especially useful for sending bug reports upstream to the developers of the solver. To enable this, use e.g. SolverOptions::SetOption(kStandaloneReproductionFileName, “reproduction.txt”). To disable, use SolverOptions::SetOption(kStandaloneReproductionFileName, “”), where the empty string “” indicates that no file should be written.
- __init__(self: pydrake.solvers.CommonSolverOption, value: int) None
- kPrintFileName = <CommonSolverOption.kPrintFileName: 0>
- kPrintToConsole = <CommonSolverOption.kPrintToConsole: 1>
- kStandaloneReproductionFileName = <CommonSolverOption.kStandaloneReproductionFileName: 2>
- property name
- property value
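For example, to ask a solver to print its per-iteration progress to the console (a minimal sketch, not from the original documentation):
from pydrake.solvers import CommonSolverOption, SolverOptions

options = SolverOptions()
options.SetOption(CommonSolverOption.kPrintToConsole, 1)
options.SetOption(CommonSolverOption.kPrintFileName, "")   # "" means do not print to a file
# Pass `options` as the solver_options argument of Solve().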
- class pydrake.solvers.Constraint
Bases:
pydrake.solvers.EvaluatorBase
A constraint is a function + lower and upper bounds.
Solver interfaces must acknowledge that these constraints are mutable. Parameters can change after the constraint is constructed and before the call to Solve().
It should support evaluating the constraint, and adding it to an optimization problem.
- __init__(*args, **kwargs)
- CheckSatisfied(*args, **kwargs)
Overloaded function.
CheckSatisfied(self: pydrake.solvers.Constraint, x: numpy.ndarray[numpy.float64[m, 1]], tol: float = 1e-06) -> bool
Return whether this constraint is satisfied by the given value, x.
- Parameter
x
: A num_vars x 1 vector.
- Parameter
tol
: A tolerance for bound checking.
- Raises
RuntimeError if the size of x isn't correct.
CheckSatisfied(self: pydrake.solvers.Constraint, x: numpy.ndarray[object[m, 1]], tol: float = 1e-06) -> bool
Return whether this constraint is satisfied by the given value, x.
- Parameter
x
: A num_vars x 1 vector.
- Parameter
tol
: A tolerance for bound checking.
- Raises
RuntimeError if the size of x isn't correct.
CheckSatisfied(self: pydrake.solvers.Constraint, x: numpy.ndarray[object[m, 1]]) -> pydrake.symbolic.Formula
Return whether this constraint is satisfied by the given value, x.
- Parameter
x
: A num_vars x 1 vector.
- Parameter
tol
: A tolerance for bound checking.
- Raises
RuntimeError if the size of x isn't correct.
- CheckSatisfiedVectorized(self: pydrake.solvers.Constraint, x: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], tol: float) list[bool]
A “vectorized” version of CheckSatisfied. It evaluates the constraint for every column of x.
- lower_bound(self: pydrake.solvers.Constraint) numpy.ndarray[numpy.float64[m, 1]]
- num_constraints(self: pydrake.solvers.Constraint) int
Number of rows in the output constraint.
- upper_bound(self: pydrake.solvers.Constraint) numpy.ndarray[numpy.float64[m, 1]]
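A small sketch of CheckSatisfied using a concrete constraint type (BoundingBoxConstraint, documented above; the bounds and test points are illustrative):
import numpy as np
from pydrake.solvers import BoundingBoxConstraint

c = BoundingBoxConstraint(np.array([0.0, 0.0]), np.array([1.0, 2.0]))
print(c.CheckSatisfied(np.array([0.5, 1.5])))        # True
print(c.CheckSatisfied(np.array([0.5, 3.0])))        # False
# Vectorized form: one column per point to check.
points = np.asfortranarray([[0.5, 0.5],
                            [1.5, 3.0]])
print(c.CheckSatisfiedVectorized(points, tol=1e-6))  # [True, False]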
- class pydrake.solvers.Cost
Bases:
pydrake.solvers.EvaluatorBase
Provides an abstract base for all costs.
- __init__(*args, **kwargs)
- class pydrake.solvers.CsdpSolver
Bases:
pydrake.solvers.SolverInterface
Wrap CSDP solver such that it can solve a drake::solvers::MathematicalProgram.
Note
CSDP doesn’t accept free variables, while drake::solvers::MathematicalProgram does. In order to convert MathematicalProgram into CSDP format, we provide several approaches to remove free variables. You can set the approach through
SolverOptions solver_options;
solver_options.SetOption(CsdpSolver::id(), "drake::RemoveFreeVariableMethod",
                         static_cast<int>(RemoveFreeVariableMethod::kNullspace));
CsdpSolver solver;
auto result = solver.Solve(prog, std::nullopt, solver_options);
For more details, check out RemoveFreeVariableMethod.
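In Python, the same option can be set through SolverOptions; a sketch (the tiny SDP below is an illustrative assumption, and the enum-to-int conversion mirrors the static_cast in the C++ snippet):
import numpy as np
from pydrake.solvers import (
    CsdpSolver,
    MathematicalProgram,
    RemoveFreeVariableMethod,
    SolverOptions,
)

prog = MathematicalProgram()
X = prog.NewSymmetricContinuousVariables(2, "X")
prog.AddPositiveSemidefiniteConstraint(X)
prog.AddLinearConstraint(X[0, 0] + X[1, 1] == 1)
prog.AddLinearCost(-X[0, 1])

options = SolverOptions()
options.SetOption(CsdpSolver.id(), "drake::RemoveFreeVariableMethod",
                  int(RemoveFreeVariableMethod.kNullspace))

solver = CsdpSolver()
if solver.available():
    result = solver.Solve(prog, None, options)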
- __init__(self: pydrake.solvers.CsdpSolver) None
Default constructor
- static id() pydrake.solvers.SolverId
- class pydrake.solvers.CsdpSolverDetails
The CSDP solver details after calling Solve() function. The user can call MathematicalProgramResult::get_solver_details<CsdpSolver>() to obtain the details.
- __init__(*args, **kwargs)
- property dual_objective
The dual objective value.
- property primal_objective
The primal objective value.
- property return_code
Refer to the Return Codes section of CSDP 6.2.0 User’s Guide for explanation on the return code. Some of the common return codes are
0: Problem is solved to optimality.
1: Problem is primal infeasible.
2: Problem is dual infeasible.
3: Problem solved to near optimality.
4: Maximum iterations reached.
5: Stuck at edge of primal feasibility.
6: Stuck at edge of dual feasibility.
7: Lack of progress.
8: X, Z, or O is singular.
9: NaN or Inf values encountered.
- property y_val
CSDP solves a primal problem of the form
max tr(C*X)
s.t tr(Aᵢ*X) = aᵢ
    X ≽ 0
The dual form is
min aᵀy
s.t ∑ᵢ yᵢAᵢ - C = Z
    Z ≽ 0
y, Z are the variables for the dual problem. y_val, Z_val are the solutions to the dual problem.
- property Z_val
- class pydrake.solvers.EvaluatorBase
- __init__(*args, **kwargs)
- Eval(*args, **kwargs)
Overloaded function.
Eval(self: pydrake.solvers.EvaluatorBase, x: numpy.ndarray[numpy.float64[m, 1]]) -> numpy.ndarray[numpy.float64[m, 1]]
Evaluates the expression.
- Parameter
x
: A num_vars x 1 input vector.
- Parameter
y
: A num_outputs x 1 output vector.
Eval(self: pydrake.solvers.EvaluatorBase, x: numpy.ndarray[object[m, 1]]) -> numpy.ndarray[object[m, 1]]
Evaluates the expression.
- Parameter
x
: A num_vars x 1 input vector.
- Parameter
y
: A num_outputs x 1 output vector.
Eval(self: pydrake.solvers.EvaluatorBase, x: numpy.ndarray[object[m, 1]]) -> numpy.ndarray[object[m, 1]]
Evaluates the expression.
- Parameter
x
: A num_vars x 1 input vector.
- Parameter
y
: A num_outputs x 1 output vector.
- get_description(self: pydrake.solvers.EvaluatorBase) str
Getter for a human-friendly description for the evaluator.
- gradient_sparsity_pattern(self: pydrake.solvers.EvaluatorBase) Optional[list[tuple[int, int]]]
Returns the vector of (row_index, col_index) that contains all the entries in the gradient of Eval function (∂y/∂x) whose value could be non-zero, namely if ∂yᵢ/∂xⱼ could be non-zero, then the pair (i, j) is in gradient_sparsity_pattern.
- Returns
gradient_sparsity_pattern
: If nullopt, then we regard all entries of the gradient as potentially non-zero.
- is_thread_safe(self: pydrake.solvers.EvaluatorBase) bool
Returns whether it is safe to call Eval in parallel.
- num_outputs(self: pydrake.solvers.EvaluatorBase) int
Getter for the number of outputs, namely the number of rows in y, as used in Eval(x, y).
- num_vars(self: pydrake.solvers.EvaluatorBase) int
Getter for the number of variables, namely the number of rows in x, as used in Eval(x, y).
- set_description(self: pydrake.solvers.EvaluatorBase, arg0: str) None
Set a human-friendly description for the evaluator.
- SetGradientSparsityPattern(self: pydrake.solvers.EvaluatorBase, gradient_sparsity_pattern: list[tuple[int, int]]) None
Set the sparsity pattern of the gradient matrix ∂y/∂x (the gradient of the y value in Eval, w.r.t. x in Eval). gradient_sparsity_pattern contains all the pairs of (row_index, col_index) for which the corresponding entries could have non-zero value in the gradient matrix ∂y/∂x.
- ToLatex(self: pydrake.solvers.EvaluatorBase, vars: numpy.ndarray[object[m, 1]], precision: int = 3) str
Returns a LaTeX string describing this evaluator. Does not include any characters to enter/exit math mode; you might want, e.g. “$$” + evaluator.ToLatex() + “$$”.
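A sketch showing the base-class accessors through a constraint obtained from a program (the constraint itself is an illustrative choice, not from the original documentation):
import numpy as np
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
binding = prog.AddConstraint(x[0] * x[1] == 1)
ev = binding.evaluator()                  # an EvaluatorBase subclass
ev.set_description("unit-product constraint")
print(ev.get_description(), ev.num_vars(), ev.num_outputs())
print(ev.Eval(np.array([2.0, 0.5])))      # evaluates x0 * x1 -> [1.0]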
- class pydrake.solvers.ExponentialConeConstraint
Bases:
pydrake.solvers.Constraint
An exponential cone constraint is a special type of convex cone constraint. We constrain A * x + b to be in the exponential cone, where A has 3 rows, b is in ℝ³, and x is the decision variable. A vector z in ℝ³ is in the exponential cone if {z₀, z₁, z₂ | z₀ ≥ z₁ * exp(z₂ / z₁), z₁ > 0}. Equivalently, this constraint can be reformulated with the logarithm function as {z₀, z₁, z₂ | z₂ ≤ z₁ * log(z₀ / z₁), z₀ > 0, z₁ > 0}.
The Eval function implemented in this class is z₀ - z₁ * exp(z₂ / z₁) >= 0, z₁ > 0, where z = A * x + b. It is not recommended to solve an exponential cone constraint through generic nonlinear optimization: the nonlinear solver can accidentally set z₁ = 0, where the constraint is not well defined. Instead, the user should consider solving the program through conic solvers that can exploit the exponential cone, such as MOSEK™ and SCS.
- __init__(self: pydrake.solvers.ExponentialConeConstraint, A: numpy.ndarray[numpy.float64[m, n]], b: numpy.ndarray[numpy.float64[3, 1]]) None
Constructor for exponential cone. Constrains A * x + b to be in the exponential cone.
- Precondition:
A has 3 rows.
- A(self: pydrake.solvers.ExponentialConeConstraint) numpy.ndarray[numpy.float64[m, n]]
Getter for matrix A.
- b(self: pydrake.solvers.ExponentialConeConstraint) numpy.ndarray[numpy.float64[3, 1]]
Getter for vector b.
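A construction sketch (the particular A and b are illustrative assumptions):
import numpy as np
from pydrake.solvers import ExponentialConeConstraint, MathematicalProgram

# Constrain z = A*x + b to lie in the exponential cone.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.0, 1.0, 0.0])
constraint = ExponentialConeConstraint(A, b)

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddConstraint(constraint, x)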
- class pydrake.solvers.ExpressionConstraint
Bases:
pydrake.solvers.Constraint
Impose a generic (potentially nonlinear) constraint represented as a vector of symbolic Expression. Expression::Evaluate is called on every constraint evaluation.
Uses symbolic::Jacobian to provide the gradients to the AutoDiff method.
- __init__(self: pydrake.solvers.ExpressionConstraint, v: numpy.ndarray[object[m, 1]], lb: numpy.ndarray[numpy.float64[m, 1]], ub: numpy.ndarray[numpy.float64[m, 1]]) None
- expressions(self: pydrake.solvers.ExpressionConstraint) numpy.ndarray[object[m, 1]]
- Returns
the symbolic expressions.
- vars(self: pydrake.solvers.ExpressionConstraint) numpy.ndarray[object[m, 1]]
- Returns
the list of the variables involved in the vector of expressions, in the order that they are expected to be received during DoEval. Any Binding that connects this constraint to decision variables should pass this list of variables to the Binding.
- class pydrake.solvers.ExpressionCost
Bases:
pydrake.solvers.Cost
Impose a generic (potentially nonlinear) cost represented as a symbolic Expression. Expression::Evaluate is called on every constraint evaluation.
Uses symbolic::Jacobian to provide the gradients to the AutoDiff method.
- __init__(self: pydrake.solvers.ExpressionCost, e: pydrake.symbolic.Expression) None
- expression(self: pydrake.solvers.ExpressionCost) pydrake.symbolic.Expression
- Returns
the symbolic expression.
- vars(self: pydrake.solvers.ExpressionCost) numpy.ndarray[object[m, 1]]
- Returns
the list of the variables involved in the vector of expressions, in the order that they are expected to be received during DoEval. Any Binding that connects this constraint to decision variables should pass this list of variables to the Binding.
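A small sketch (the expression is illustrative; note that Eval expects values in the order reported by vars()):
import numpy as np
from pydrake.symbolic import Variable
from pydrake.solvers import ExpressionCost

x = Variable("x")
y = Variable("y")
cost = ExpressionCost(x**2 + 2 * y)
print(cost.vars())                   # the variable order expected by Eval
values = np.array([1.0, 3.0])        # values given in that order
print(cost.Eval(values))             # value of the expression at those values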
- pydrake.solvers.GenerateSDPA(prog: pydrake.solvers.MathematicalProgram, file_name: str, method: pydrake.solvers.RemoveFreeVariableMethod = <RemoveFreeVariableMethod.kNullspace: 2>) bool
SDPA is a format to record an SDP problem
max tr(C*X)
s.t tr(Aᵢ*X) = gᵢ
    X ≽ 0
or the dual of the problem
min gᵀy s.t ∑ᵢ yᵢAᵢ - C ≽ 0
where X is a symmetric block diagonal matrix. The format is described in http://plato.asu.edu/ftp/sdpa_format.txt. Many solvers, such as CSDP, DSDP, SDPA, sedumi and SDPT3, accept an SDPA format file as the input. This function reads a MathematicalProgram that can be formulated as above, and writes an SDPA file.
- Parameter
prog
: a program that contains an optimization program.
- Parameter
file_name
: The name of the file, note that the extension will be added automatically.
- Parameter
method
: If prog contains free variables (i.e., variables without bounds), then we need to remove these free variables to write the program in the SDPA format. Please refer to RemoveFreeVariableMethod for details on how to remove the free variables. Default: RemoveFreeVariableMethod::kNullspace.
- Returns
is_success
: Returns true if we can generate the SDPA file. The failure could be 1. prog cannot be captured by the formulation above. 2. prog cannot create a file with the given name, etc.
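A writing sketch (the small SDP and the file name are illustrative assumptions; the file extension is appended automatically):
from pydrake.solvers import (
    GenerateSDPA,
    MathematicalProgram,
    RemoveFreeVariableMethod,
)

prog = MathematicalProgram()
X = prog.NewSymmetricContinuousVariables(2, "X")
prog.AddPositiveSemidefiniteConstraint(X)
prog.AddLinearConstraint(X[0, 0] + X[1, 1] == 1)
prog.AddLinearCost(-X[0, 1])

is_success = GenerateSDPA(prog, "my_sdp",
                          RemoveFreeVariableMethod.kNullspace)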
- pydrake.solvers.GetAvailableSolvers(prog_type: pydrake.solvers.ProgramType) list[pydrake.solvers.SolverId]
Returns the list of available and enabled solvers that definitely accept all programs of the given program type. The order of the returned SolverIds reflects an approximate order of preference, from most preferred (front) to least preferred (back). Because we are analyzing only based on the program type rather than a specific program, it's possible that solvers later in the list would perform better in certain situations. To obtain the truly best solver, use ChooseBestSolver() instead.
Note
If a solver only accepts a subset of the program type, then that solver is not included in the returned results. For example EqualityConstrainedQPSolver doesn’t accept programs with inequality linear constraints, so it doesn’t show up in the return of GetAvailableSolvers(ProgramType::kQP).
- pydrake.solvers.GetProgramType(arg0: pydrake.solvers.MathematicalProgram) pydrake.solvers.ProgramType
Returns the type of the optimization program (LP, QP, etc), based on the properties of its cost/constraints/variables. Each mathematical program should be characterized by a unique type. If a program can be characterized as either type A or type B (for example, a program with linear constraint and linear costs can be characterized as either an LP or an SDP), then we choose the type corresponding to a smaller set of programs (LP in this case).
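A classification sketch (the LP below is an illustrative assumption):
from pydrake.solvers import (
    GetAvailableSolvers,
    GetProgramType,
    MathematicalProgram,
    ProgramType,
)

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddLinearCost(x[0] + x[1])
prog.AddLinearConstraint(x[0] + x[1] >= 1)

print(GetProgramType(prog))               # expected: ProgramType.kLP
for solver_id in GetAvailableSolvers(ProgramType.kLP):
    print(solver_id.name())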
- class pydrake.solvers.GurobiSolver
Bases:
pydrake.solvers.SolverInterface
An implementation of SolverInterface for the commercially-licensed Gurobi solver (https://www.gurobi.com/).
The default build of Drake is not configured to use Gurobi, therefore SolverInterface::available() will return false. You must compile Drake from source in order to link against Gurobi. For details, refer to the documentation at https://drake.mit.edu/bazel.html#proprietary-solvers.
The GRB_LICENSE_FILE environment variable controls whether or not SolverInterface::enabled() returns true. If it is set to any non-empty value, then the solver is enabled; otherwise, the solver is not enabled.
Gurobi solver supports options/parameters listed in https://www.gurobi.com/documentation/10.0/refman/parameters.html. On top of these options, we provide the following additional options:
1. “GRBwrite”, set to a file name so that Gurobi solver will write the optimization model to this file; check https://www.gurobi.com/documentation/10.0/refman/py_model_write.html for more details, such as all supported file extensions. Set this option to “” if you don’t want to write to a file. The default is not to write to a file.
2. “GRBcomputeIIS”, set to 1 to compute an Irreducible Inconsistent Subsystem (IIS) when the problem is infeasible. Refer to https://www.gurobi.com/documentation/10.0/refman/py_model_computeiis.html for more details. Often this method is called together with setting GRBwrite to “FILENAME.ilp” to write the IIS to a file with extension “ilp”. The default is not to compute an IIS.
GurobiSolver supports parallelization during Solve(). If both the “Threads” integer solver option and CommonSolverOption::kMaxThreads have been set by the user, then the value in “Threads” will be used as the number of threads.
If neither the “Threads” integer solver option nor CommonSolverOption::kMaxThreads has been set by the user, then GurobiSolver uses the environment variable GUROBI_NUM_THREADS (if set) as a default value for “Threads”.
If none of “Threads”, CommonSolverOption::kMaxThreads, or GUROBI_NUM_THREADS are set, then Drake’s default maximum parallelism will be used.
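A sketch of setting the Gurobi-specific options mentioned above (assumes a Gurobi-enabled Drake build and license; the toy MILP is illustrative):
from pydrake.solvers import GurobiSolver, MathematicalProgram, SolverOptions

prog = MathematicalProgram()
x = prog.NewBinaryVariables(2, "x")
prog.AddLinearCost(-x[0] - x[1])
prog.AddLinearConstraint(x[0] + x[1] <= 1)

solver = GurobiSolver()
if solver.available() and solver.enabled():
    options = SolverOptions()
    options.SetOption(GurobiSolver.id(), "GRBwrite", "model.lp")
    options.SetOption(GurobiSolver.id(), "Threads", 2)
    result = solver.Solve(prog, None, options)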
- __init__(self: pydrake.solvers.GurobiSolver) None
- static AcquireLicense() pydrake.solvers.GurobiSolver.License
This acquires a Gurobi license environment shared among all GurobiSolver instances. The environment will stay valid as long as at least one shared_ptr returned by this function is alive. GurobiSolver calls this method on each Solve().
If the license file contains the string HOSTID, then we treat this as confirmation that the license is attached to the local host, and maintain an internal copy of the shared_ptr for the lifetime of the process. Otherwise the default behavior is to only hold the license while at least one GurobiSolver instance is alive.
Call this method directly and maintain the shared_ptr ONLY if you must use different MathematicalProgram instances at different instances in time, and repeatedly acquiring the license is costly (e.g., requires contacting a license server).
- Returns
A shared pointer to a license environment that will stay valid as long as any shared_ptr returned by this function is alive. If Gurobi is not available in your build, this will return a null (empty) shared_ptr.
- Raises
RuntimeError if Gurobi is available but a license cannot be –
obtained. –
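A minimal sketch of holding one shared license across several solves (assumes a Gurobi-enabled build of Drake and a reachable license; the programs are illustrative only):
from pydrake.solvers import GurobiSolver, MathematicalProgram
# Acquire the license once, up front, instead of once per Solve().
gurobi_license = GurobiSolver.AcquireLicense()
if gurobi_license.is_valid():
    solver = GurobiSolver()
    for _ in range(3):
        prog = MathematicalProgram()
        x = prog.NewContinuousVariables(1, "x")
        prog.AddLinearCost(x[0])
        prog.AddBoundingBoxConstraint(0.0, 1.0, x)
        solver.Solve(prog)
# The license environment is released once gurobi_license (and any other
# shared_ptr copies) go out of scope.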
- AddMipNodeCallback(self: pydrake.solvers.GurobiSolver, callback: Callable[[pydrake.solvers.MathematicalProgram, pydrake.solvers.GurobiSolver.SolveStatusInfo, numpy.ndarray[numpy.float64[m, 1]], numpy.ndarray[object[m, 1]]], None]) None
Registers a callback to be called at intermediate solutions during the solve.
- Parameter
callback
: User callback function.
- Parameter
user_data
: Arbitrary data that will be passed to the user callback function.
- AddMipSolCallback(self: pydrake.solvers.GurobiSolver, callback: Callable[[pydrake.solvers.MathematicalProgram, pydrake.solvers.GurobiSolver.SolveStatusInfo], None]) None
Registers a callback to be called at feasible solutions during the solve.
- Parameter
callback
: User callback function.
- Parameter
usrdata
: Arbitrary data that will be passed to the user callback function.
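A minimal sketch of registering a solution callback (assumes a Gurobi-enabled build of Drake; the small mixed-integer program and the callback body are illustrative only):
from pydrake.solvers import GurobiSolver, MathematicalProgram
prog = MathematicalProgram()
b = prog.NewBinaryVariables(3, "b")
prog.AddLinearCost(-(b[0] + b[1] + b[2]))
prog.AddLinearConstraint(b[0] + b[1] + b[2] <= 2)
def on_new_solution(callback_prog, info):
    # info is a GurobiSolver.SolveStatusInfo.
    print(f"feasible solutions so far: {info.feasible_solutions_count}, "
          f"best objective: {info.best_objective}")
solver = GurobiSolver()
solver.AddMipSolCallback(on_new_solution)
if solver.available() and solver.enabled():
    result = solver.Solve(prog)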
- static id() pydrake.solvers.SolverId
- class License
Context-manageable license from
GurobiSolver.AcquireLicense()
.
- __init__(*args, **kwargs)
- is_valid(self: pydrake.solvers.GurobiSolver.License) bool
Indicates that this has a valid license that has not been released.
- class SolveStatusInfo
Contains info returned to a user function that handles a Node or Solution callback.
See also
MipNodeCallbackFunction
See also
MipSolCallbackFunction
- __init__(*args, **kwargs)
- property best_bound
Best known objective lower bound.
- property best_objective
Objective of best solution yet.
- property current_objective
Objective of current solution.
- property explored_node_count
Number of nodes explored so far.
- property feasible_solutions_count
Number of feasible solutions found so far.
- property reported_runtime
Runtime as of this callback.
- class pydrake.solvers.GurobiSolverDetails
The Gurobi solver details after calling Solve() function. The user can call MathematicalProgramResult::get_solver_details<GurobiSolver>() to obtain the details.
- __init__(*args, **kwargs)
- property error_code
The error code returned from the Gurobi call. Please refer to https://www.gurobi.com/documentation/10.0/refman/error_codes.html
- property objective_bound
The best known bound on the optimal objective. This is used in mixed integer optimization. Please refer to https://www.gurobi.com/documentation/10.0/refman/objbound.html
- property optimization_status
The status code when the optimize call has returned. Please refer to https://www.gurobi.com/documentation/10.0/refman/optimization_status_codes.html
- property optimizer_time
The Gurobi optimization time. Please refer to https://www.gurobi.com/documentation/10.0/refman/runtime.html
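A minimal sketch of reading these details after a solve (assumes a Gurobi-enabled build of Drake; the program is illustrative only):
from pydrake.solvers import GurobiSolver, MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, "x")
prog.AddLinearCost(x[0])
prog.AddBoundingBoxConstraint(0.0, 1.0, x)
solver = GurobiSolver()
if solver.available() and solver.enabled():
    result = solver.Solve(prog)
    details = result.get_solver_details()  # a GurobiSolverDetails
    print(details.optimization_status, details.optimizer_time)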
- class pydrake.solvers.IntervalBinning
For a continuous variable whose range is cut into small intervals, we will use binary variables to represent which interval the continuous variable is in. We support two representations, either using logarithmic number of binary variables, or linear number of binary variables. For more details,
See also
AddLogarithmicSos2Constraint and AddSos2Constraint
Members:
kLogarithmic
kLinear
- __init__(self: pydrake.solvers.IntervalBinning, value: int) None
- kLinear = <IntervalBinning.kLinear: 1>
- kLogarithmic = <IntervalBinning.kLogarithmic: 0>
- property name
- property value
- class pydrake.solvers.IpoptSolver
Bases:
pydrake.solvers.SolverInterface
A wrapper to call Ipopt (https://coin-or.github.io/Ipopt/) using Drake’s MathematicalProgram.
The IpoptSolver is NOT thread-safe to call in parallel. This is due to Ipopt’s reliance on the MUMPS linear solver, which is not safe to call concurrently (see https://github.com/coin-or/Ipopt/issues/733). This can be resolved by enabling the SPRAL solver (see Drake issue https://github.com/RobotLocomotion/drake/issues/21476).
- __init__(self: pydrake.solvers.IpoptSolver) None
- static id() pydrake.solvers.SolverId
- class pydrake.solvers.IpoptSolverDetails
The Ipopt solver details after calling Solve() function. The user can call MathematicalProgramResult::get_solver_details<IpoptSolver>() to obtain the details.
- __init__(*args, **kwargs)
- ConvertStatusToString(self: pydrake.solvers.IpoptSolverDetails) str
Convert status field to string. This function is useful if you want to interpret the meaning of status.
- property g
The final value for the constraint function.
- property lambda_val
The final value for the constraint multiplier.
- property status
The final status of the solver. Please refer to section 6 of “Introduction to Ipopt: A tutorial for downloading, installing, and using Ipopt”. You can also find the meaning of the status in Ipopt::SolverReturn, defined in IpAlgTypes.hpp.
- property z_L
The final value for the lower bound multiplier.
- property z_U
The final value for the upper bound multiplier.
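A minimal sketch of solving with Ipopt and inspecting these details (the small program below is illustrative only):
from pydrake.solvers import IpoptSolver, MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddCost((x[0] - 1) ** 2 + (x[1] + 2) ** 2)
prog.AddConstraint(x[0] + x[1] == 0)
solver = IpoptSolver()
if solver.available():
    result = solver.Solve(prog)
    details = result.get_solver_details()  # an IpoptSolverDetails
    print(details.ConvertStatusToString(), details.status)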
- class pydrake.solvers.L1NormCost
Bases:
pydrake.solvers.Cost
Implements a cost of the form ‖Ax + b‖₁. Note that this cost is non-differentiable when any element of Ax + b equals zero.
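A minimal sketch of adding this cost to a program (the coefficients are illustrative only; AddCost binds the cost object to the decision variables):
import numpy as np
from pydrake.solvers import L1NormCost, MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, -1.0])
cost = L1NormCost(A, b)    # cost evaluates |A x + b| in the 1-norm
prog.AddCost(cost, x)      # bind the cost to the decision variables x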
- __init__(self: pydrake.solvers.L1NormCost, A: numpy.ndarray[numpy.float64[m, n]], b: numpy.ndarray[numpy.float64[m, 1]]) None
Construct a cost of the form ‖Ax + b‖₁.
- Parameter
A
: Linear term.
- Parameter
b
: Constant term.
- Raises
RuntimeError if the size of A and b don't match. –
- A(self: pydrake.solvers.L1NormCost) numpy.ndarray[numpy.float64[m, n]]
- b(self: pydrake.solvers.L1NormCost) numpy.ndarray[numpy.float64[m, 1]]
- UpdateCoefficients(self: pydrake.solvers.L1NormCost, new_A: numpy.ndarray[numpy.float64[m, n]], new_b: numpy.ndarray[numpy.float64[m, 1]] = 0) None
Updates the coefficients of the cost. Note that the number of variables (columns of A) cannot change.
- Parameter
new_A
: New linear term.
- Parameter
new_b
: New constant term.
- class pydrake.solvers.L2NormCost
Bases:
pydrake.solvers.Cost
Implements a cost of the form ‖Ax + b‖₂.
- __init__(*args, **kwargs)
Overloaded function.
__init__(self: pydrake.solvers.L2NormCost, A: numpy.ndarray[numpy.float64[m, n]], b: numpy.ndarray[numpy.float64[m, 1]]) -> None
Construct a cost of the form ‖Ax + b‖₂.
- Parameter
A
: Linear term.
- Parameter
b
: Constant term.
- Raises
RuntimeError if the size of A and b don't match. –
__init__(self: pydrake.solvers.L2NormCost, A: scipy.sparse.csc_matrix[numpy.float64], b: numpy.ndarray[numpy.float64[m, 1]]) -> None
Overloads constructor with a sparse A matrix.
- b(self: pydrake.solvers.L2NormCost) numpy.ndarray[numpy.float64[m, 1]]
- get_sparse_A(self: pydrake.solvers.L2NormCost) scipy.sparse.csc_matrix[numpy.float64]
- GetDenseA(self: pydrake.solvers.L2NormCost) numpy.ndarray[numpy.float64[m, n]]
- UpdateCoefficients(*args, **kwargs)
Overloaded function.
UpdateCoefficients(self: pydrake.solvers.L2NormCost, new_A: numpy.ndarray[numpy.float64[m, n]], new_b: numpy.ndarray[numpy.float64[m, 1]] = 0) -> None
Updates the coefficients of the cost. Note that the number of variables (columns of A) cannot change.
- Parameter
new_A
: New linear term.
- Parameter
new_b
: New constant term.
UpdateCoefficients(self: pydrake.solvers.L2NormCost, new_A: scipy.sparse.csc_matrix[numpy.float64], new_b: numpy.ndarray[numpy.float64[m, 1]] = 0) -> None
Overloads UpdateCoefficients but with a sparse A matrix.
- class pydrake.solvers.LinearComplementarityConstraint
Bases:
pydrake.solvers.Constraint
Implements a constraint of the form:
Click to expand C++ code...
Mx + q ≥ 0 x ≥ 0 x'(Mx + q) == 0
Often this is summarized with the short-hand:
Click to expand C++ code...
0 ≤ z ⊥ Mz+q ≥ 0
An implied slack variable complements any 0 component of x. To get the slack values at a given solution x, use Eval(x).
- __init__(*args, **kwargs)
- M(self: pydrake.solvers.LinearComplementarityConstraint) numpy.ndarray[numpy.float64[m, n]]
- q(self: pydrake.solvers.LinearComplementarityConstraint) numpy.ndarray[numpy.float64[m, 1]]
- class pydrake.solvers.LinearConstraint
Bases:
pydrake.solvers.Constraint
Implements a constraint of the form \(lb <= Ax <= ub\)
- __init__(*args, **kwargs)
Overloaded function.
__init__(self: pydrake.solvers.LinearConstraint, A: numpy.ndarray[numpy.float64[m, n]], lb: numpy.ndarray[numpy.float64[m, 1]], ub: numpy.ndarray[numpy.float64[m, 1]]) -> None
Construct the linear constraint lb <= A*x <= ub
Throws if A has any entry which is not finite.
__init__(self: pydrake.solvers.LinearConstraint, A: scipy.sparse.csc_matrix[numpy.float64], lb: numpy.ndarray[numpy.float64[m, 1]], ub: numpy.ndarray[numpy.float64[m, 1]]) -> None
Overloads constructor with a sparse A matrix. Throws if A has any entry which is not finite.
- get_sparse_A(self: pydrake.solvers.LinearConstraint) scipy.sparse.csc_matrix[numpy.float64]
- GetDenseA(self: pydrake.solvers.LinearConstraint) numpy.ndarray[numpy.float64[m, n]]
Get the matrix A as a dense matrix.
Note
this might involve memory allocation to convert a sparse matrix to a dense one; for better performance, call get_sparse_A(), which returns a sparse matrix.
- is_dense_A_constructed(self: pydrake.solvers.LinearConstraint) bool
Returns true iff this constraint already has a dense representation, i.e., if GetDenseA() will be cheap.
- RemoveTinyCoefficient(self: pydrake.solvers.LinearConstraint, tol: float) None
Sets A(i, j) to zero if abs(A(i, j)) <= tol. Oftentimes the coefficient A is computed numerically with round-off errors. Such small round-off errors can cause numerical issues for certain optimization solvers. Hence it is recommended to remove the tiny coefficients to achieve numerical robustness.
- Parameter
tol
: The entries in A with absolute value <= tol will be set to 0.
Note
tol>= 0.
- set_bounds(self: pydrake.solvers.LinearConstraint, new_lb: numpy.ndarray[numpy.float64[m, 1]], new_ub: numpy.ndarray[numpy.float64[m, 1]]) None
Set the upper and lower bounds of the constraint.
- Parameter
new_lb
: A num_constraints x 1 vector.
- Parameter
new_ub
: A num_constraints x 1 vector.
Note
If you want to expose this method in a subclass, add “using Constraint::set_bounds;” to that subclass, as LinearConstraint does.
- UpdateCoefficients(*args, **kwargs)
Overloaded function.
UpdateCoefficients(self: pydrake.solvers.LinearConstraint, new_A: numpy.ndarray[numpy.float64[m, n]], new_lb: numpy.ndarray[numpy.float64[m, 1]], new_ub: numpy.ndarray[numpy.float64[m, 1]]) -> None
Updates the linear term, upper and lower bounds in the linear constraint. The updated constraint is: new_lb <= new_A * x <= new_ub Note that the size of constraints (number of rows) can change, but the number of variables (number of cols) cannot.
Throws if new_A has any entry which is not finite or if new_A, new_lb, and new_ub don’t all have the same number of rows.
- Parameter
new_A
: new linear term
- Parameter
new_lb
: new lower bound
- Parameter
new_ub
: new upper bound
UpdateCoefficients(self: pydrake.solvers.LinearConstraint, new_A: scipy.sparse.csc_matrix[numpy.float64], new_lb: numpy.ndarray[numpy.float64[m, 1]], new_ub: numpy.ndarray[numpy.float64[m, 1]]) -> None
Overloads UpdateCoefficients but with a sparse A matrix.
Throws if new_A has any entry which is not finite or if new_A, new_lb, and new_ub don’t all have the same number of rows.
- UpdateLowerBound(self: pydrake.solvers.LinearConstraint, new_lb: numpy.ndarray[numpy.float64[m, 1]]) None
Updates the lower bound.
Note
If you want to expose this method in a subclass, add “using Constraint::UpdateLowerBound;” to that subclass, as LinearConstraint does.
- UpdateUpperBound(self: pydrake.solvers.LinearConstraint, new_ub: numpy.ndarray[numpy.float64[m, 1]]) None
Updates the upper bound.
Note
If you want to expose this method in a subclass, add “using Constraint::UpdateUpperBound;” to that subclass, as LinearConstraint does.
- class pydrake.solvers.LinearCost
Bases:
pydrake.solvers.Cost
Implements a cost of the form
\[a'x + b\].
- __init__(self: pydrake.solvers.LinearCost, a: numpy.ndarray[numpy.float64[m, 1]], b: float) None
Construct a linear cost of the form
\[a'x + b\].
- Parameter
a
: Linear term.
- Parameter
b
: (optional) Constant term.
- a(self: pydrake.solvers.LinearCost) numpy.ndarray[numpy.float64[m, 1]]
- b(self: pydrake.solvers.LinearCost) float
- UpdateCoefficients(self: pydrake.solvers.LinearCost, new_a: numpy.ndarray[numpy.float64[m, 1]], new_b: float = 0) None
Updates the coefficients of the cost. Note that the number of variables (size of a) cannot change.
- Parameter
new_a
: New linear term.
- Parameter
new_b
: (optional) New constant term.
- class pydrake.solvers.LinearEqualityConstraint
Bases:
pydrake.solvers.LinearConstraint
Implements a constraint of the form \(Ax = b\)
- __init__(*args, **kwargs)
Overloaded function.
__init__(self: pydrake.solvers.LinearEqualityConstraint, Aeq: numpy.ndarray[numpy.float64[m, n]], beq: numpy.ndarray[numpy.float64[m, 1]]) -> None
Constructs the linear equality constraint Aeq * x = beq.
Throws if any entry in Aeq or beq is not finite.
__init__(self: pydrake.solvers.LinearEqualityConstraint, Aeq: scipy.sparse.csc_matrix[numpy.float64], beq: numpy.ndarray[numpy.float64[m, 1]]) -> None
Overloads the constructor with a sparse matrix Aeq.
__init__(self: pydrake.solvers.LinearEqualityConstraint, a: numpy.ndarray[numpy.float64[1, n]], beq: float) -> None
Constructs the linear equality constraint a.dot(x) = beq
- UpdateCoefficients(self: pydrake.solvers.LinearEqualityConstraint, Aeq: numpy.ndarray[numpy.float64[m, n]], beq: numpy.ndarray[numpy.float64[m, 1]]) None
Updates the coefficients of the linear equality constraint so that it imposes Aeq * x = beq.
Throws if any entry of beq or Aeq is not finite.
- class pydrake.solvers.LinearMatrixInequalityConstraint
Bases:
pydrake.solvers.Constraint
Impose the matrix inequality constraint on variable x
\[F_0 + x_1 F_1 + ... + x_n F_n \text{ is p.s.d}\]where p.s.d stands for positive semidefinite. \(F_0, F_1, ..., F_n\) are all given symmetric matrices of the same size.
Note
if the matrices Fᵢ all have 1 row, then it is better to impose a linear inequality constraint; if they all have 2 rows, then it is better to impose a rotated Lorentz cone constraint, since a 2 x 2 matrix X being p.s.d is equivalent to the constraint that [X(0, 0), X(1, 1), X(0, 1)] is in the rotated Lorentz cone.
- __init__(self: pydrake.solvers.LinearMatrixInequalityConstraint, F: list[numpy.ndarray[numpy.float64[m, n]]], symmetry_tolerance: float = 1e-10) None
- Parameter
F
: Each symmetric matrix F[i] should be of the same size.
- Parameter
symmetry_tolerance
: The precision to determine if the input matrices Fi are all symmetric.
See also
math::IsSymmetric().
- F(self: pydrake.solvers.LinearMatrixInequalityConstraint) list[numpy.ndarray[numpy.float64[m, n]]]
- matrix_rows(self: pydrake.solvers.LinearMatrixInequalityConstraint) int
Gets the number of rows in the matrix inequality constraint. Namely Fi are all matrix_rows() x matrix_rows() matrices.
- class pydrake.solvers.LInfNormCost
Bases:
pydrake.solvers.Cost
Implements a cost of the form ‖Ax + b‖∞. Note that this cost is non-differentiable when any two or more elements of Ax + b are equal.
- __init__(self: pydrake.solvers.LInfNormCost, A: numpy.ndarray[numpy.float64[m, n]], b: numpy.ndarray[numpy.float64[m, 1]]) None
Construct a cost of the form ‖Ax + b‖∞.
- Parameter
A
: Linear term.
- Parameter
b
: Constant term.
- Raises
RuntimeError if the size of A and b don't match. –
- A(self: pydrake.solvers.LInfNormCost) numpy.ndarray[numpy.float64[m, n]]
- b(self: pydrake.solvers.LInfNormCost) numpy.ndarray[numpy.float64[m, 1]]
- UpdateCoefficients(self: pydrake.solvers.LInfNormCost, new_A: numpy.ndarray[numpy.float64[m, n]], new_b: numpy.ndarray[numpy.float64[m, 1]] = 0) None
Updates the coefficients of the cost. Note that the number of variables (columns of A) cannot change.
- Parameter
new_A
: New linear term.
- Parameter
new_b
: New constant term.
- class pydrake.solvers.LorentzConeConstraint
Bases:
pydrake.solvers.Constraint
Constrains that the linear expression \(z=Ax+b\) lies within the Lorentz cone. A vector z ∈ ℝⁿ lies within the Lorentz cone if
\[z_0 \ge \sqrt{z_1^2+...+z_{n-1}^2}\]where A ∈ ℝⁿˣᵐ, b ∈ ℝⁿ are given matrices. Ideally this constraint should be handled by a second-order cone solver. In case the user wants to enforce this constraint through general nonlinear optimization, we provide three different formulations of the Lorentz cone constraint:
1. [kConvex] g(z) = z₀ - sqrt(z₁² + … + zₙ₋₁²) ≥ 0. This formulation is not differentiable at z₁=…=zₙ₋₁=0.
2. [kConvexSmooth] g(z) = z₀ - sqrt(z₁² + … + zₙ₋₁²) ≥ 0, but the gradient of g(z) is approximated as ∂g(z)/∂z = [1, -z₁/sqrt(z₁² + … + zₙ₋₁² + ε), …, -zₙ₋₁/sqrt(z₁²+…+zₙ₋₁²+ε)], where ε is a small positive number.
3. [kNonconvex] z₀²-(z₁²+…+zₙ₋₁²) ≥ 0, z₀ ≥ 0. This constraint is differentiable everywhere, but z₀²-(z₁²+…+zₙ₋₁²) ≥ 0 is non-convex.
For more information and visualization, please refer to https://www.epfl.ch/labs/disopt/wp-content/uploads/2018/09/7.pdf and https://docs.mosek.com/modeling-cookbook/cqo.html (Fig 3.1)
- __init__(self: pydrake.solvers.LorentzConeConstraint, A: numpy.ndarray[numpy.float64[m, n]], b: numpy.ndarray[numpy.float64[m, 1]], eval_type: pydrake.solvers.LorentzConeConstraint.EvalType = <EvalType.kConvexSmooth: 1>) None
- Raises
RuntimeError if A.rows() < 2. –
- A(self: pydrake.solvers.LorentzConeConstraint) scipy.sparse.csc_matrix[numpy.float64]
Getter for A.
- b(self: pydrake.solvers.LorentzConeConstraint) numpy.ndarray[numpy.float64[m, 1]]
Getter for b.
- eval_type(self: pydrake.solvers.LorentzConeConstraint) pydrake.solvers.LorentzConeConstraint.EvalType
Getter for eval type.
- class EvalType
We provide three possible Eval functions to represent the Lorentz cone constraint z₀ ≥ sqrt(z₁² + … + zₙ₋₁²). For more explanation on the three formulations, refer to LorentzConeConstraint documentation.
Members:
kConvex : The constraint is g(z) = z₀ - sqrt(z₁² + … + zₙ₋₁²) ≥ 0. Note this
formulation is non-differentiable at z₁= …= zₙ₋₁=0
kConvexSmooth : Same as kConvex, but with approximated gradient that exists
everywhere.
kNonconvex : Nonconvex constraint z₀²-(z₁²+…+zₙ₋₁²) ≥ 0 and z₀ ≥ 0. Note this
formulation is differentiable, but at z₁= …= zₙ₋₁=0 the gradient is also 0, so a gradient-based nonlinear solver can get stuck.
- __init__(self: pydrake.solvers.LorentzConeConstraint.EvalType, value: int) None
- kConvex = <EvalType.kConvex: 0>
- kConvexSmooth = <EvalType.kConvexSmooth: 1>
- kNonconvex = <EvalType.kNonconvex: 2>
- property name
- property value
- UpdateCoefficients(self: pydrake.solvers.LorentzConeConstraint, new_A: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], new_b: numpy.ndarray[numpy.float64[m, 1]]) None
Updates the coefficients, the updated constraint is z=new_A * x + new_b in the Lorentz cone.
- Raises
RuntimeError if the new_A.cols() != A.cols(), namely the variable –
size should not change. –
- Precondition:
new_A
has to have at least 2 rows and new_A.rows() == new_b.rows().
- pydrake.solvers.MakeFirstAvailableSolver(solver_ids: list[pydrake.solvers.SolverId]) pydrake.solvers.SolverInterface
Makes the first available and enabled solver. If no solvers are available, throws a RuntimeError.
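A minimal sketch (the candidate solver list and the program are illustrative only; assumes at least one of the listed solvers is available and enabled):
from pydrake.solvers import GurobiSolver, IpoptSolver, MakeFirstAvailableSolver, MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, "x")
prog.AddLinearCost(x[0])
prog.AddBoundingBoxConstraint(0.0, 1.0, x)
# Prefer Gurobi when it is available and enabled; otherwise fall back to Ipopt.
solver = MakeFirstAvailableSolver([GurobiSolver.id(), IpoptSolver.id()])
result = solver.Solve(prog)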
- pydrake.solvers.MakeSemidefiniteRelaxation(*args, **kwargs)
Overloaded function.
MakeSemidefiniteRelaxation(prog: pydrake.solvers.MathematicalProgram, options: pydrake.solvers.SemidefiniteRelaxationOptions = SemidefiniteRelaxationOptions(add_implied_linear_equality_constraints=True, add_implied_linear_constraints=True, preserve_convex_quadratic_constraints=False)) -> pydrake.solvers.MathematicalProgram
Constructs a new MathematicalProgram which represents the semidefinite programming convex relaxation of the (likely nonconvex) program
prog
. This method currently supports only linear and quadratic costs and constraints, but may be extended in the future with broader support. See https://underactuated.mit.edu/optimization.html#sdp_relaxation for references and examples.
Note: Currently, programs using LinearEqualityConstraint will give tighter relaxations than programs using LinearConstraint or BoundingBoxConstraint, even if lower_bound == upper_bound. Prefer LinearEqualityConstraint.
- Raises
RuntimeError if prog has costs and constraints which are not –
linear nor quadratic. –
MakeSemidefiniteRelaxation(prog: pydrake.solvers.MathematicalProgram, variable_groups: list[pydrake.symbolic.Variables], options: pydrake.solvers.SemidefiniteRelaxationOptions = SemidefiniteRelaxationOptions(add_implied_linear_equality_constraints=True, add_implied_linear_constraints=True, preserve_convex_quadratic_constraints=False)) -> pydrake.solvers.MathematicalProgram
A version of MakeSemidefiniteRelaxation that allows for specifying the sparsity of the relaxation.
For each group in
variable_groups
, the costs and constraints whose variables are a subset of the group will be jointly relaxed into a single, dense semidefinite program in the same manner as MakeSemidefiniteRelaxation(prog). These semidefinite relaxations are then aggregated into a single program, and their semidefinite variables are made to agree where the variable groups overlap.
The returned program will always have the same number of PSD variables as variable groups.
Costs and constraints whose variables are not a subset of any of the groups are not relaxed and are simply added to the aggregated program. If these costs and constraints are non-convex, then this method will throw.
As an example, consider the following program:
min x₂ᵀ * Q * x₂
subject to x₁ + x₂ ≤ 1
x₂ + x₃ ≤ 2
x₁ + x₃ ≤ 3
And suppose we call MakeSemidefiniteRelaxation(prog, std::vector<Variables>{{x₁, x₂}, {x₂,x₃}}).
The resulting relaxation would have two semidefinite variables, namely:
[U₁, U₂, x₁]        [W₁, W₂, x₂]
[U₂, U₃, x₂]   and  [W₂, W₃, x₃]
[x₁ᵀ, x₂ᵀ, 1]       [x₂ᵀ, x₃ᵀ, 1]
The first semidefinite variable would be associated to the semidefinite relaxation of the subprogram:
min x₁ᵀ * Q * x₁
subject to x₁ + x₂ ≤ 1
And the implied constraints from x₁ + x₂ ≤ 1 would be added to the first semidefinite variable. These implied constraints are additional constraints that can be placed on the matrix
[U₁, U₂, x₁]
[U₂, U₃, x₂]
[x₁ᵀ, x₂ᵀ, 1]
which are redundant in the non-convex program, but are not redundant in the semidefinite relaxation. See https://underactuated.mit.edu/optimization.html#sdp_relaxation for references and examples.
The second semidefinite variable would be associated to the semidefinite relaxation of the subprogram:
min x₂ᵀ * Q * x₂
subject to x₂ + x₃ ≤ 2
And the implied constraints from x₂ + x₃ ≤ 2 would be added to the second semidefinite variable.
Since the constraint x₁ + x₃ ≤ 3 is not a subset of any of the variable groups, it will be added to the overall relaxation, but will not be used to generate implied constraints on any semidefinite variable.
The total relaxation would also include an equality constraint that U₃ == W₁ so that the quadratic relaxation of x₂ is consistent between the two semidefinite variables.
Note: 1) Costs are only associated to a single variable group, so that the resulting aggregated program has a relaxed cost with the same scaling. 2) The homogenization variable “1” is re-used in every semidefinite variable.
- Raises
RuntimeError if there is a non-convex cost or constraint whose –
variables do not intersect with any of the variable groups. –
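A minimal sketch of relaxing a small nonconvex quadratic program (illustrative only; solving the relaxation requires an SDP-capable solver to be available to Solve()):
from pydrake.solvers import MakeSemidefiniteRelaxation, MathematicalProgram, Solve
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddCost(x[0] * x[1])                   # a nonconvex quadratic cost
prog.AddLinearConstraint(x[0] + x[1] <= 1)
prog.AddBoundingBoxConstraint(-1, 1, x)
relaxation = MakeSemidefiniteRelaxation(prog)
result = Solve(relaxation)  # solves the SDP relaxation, not the original program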
- pydrake.solvers.MakeSolver(id: pydrake.solvers.SolverId) pydrake.solvers.SolverInterface
Given the solver ID, create the solver with the matching ID.
- Raises
RuntimeError if there is no matching solver. –
- class pydrake.solvers.MathematicalProgram
MathematicalProgram stores the decision variables, the constraints and costs of an optimization problem. The user can solve the problem by calling solvers::Solve() function, and obtain the results of the optimization.
- __init__(self: pydrake.solvers.MathematicalProgram) None
- Add2NormSquaredCost(self: pydrake.solvers.MathematicalProgram, A: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) pydrake.solvers.Binding[QuadraticCost]
Adds a quadratic cost of the form |Ax-b|²=(Ax-b)ᵀ(Ax-b)
- AddBoundingBoxConstraint(*args, **kwargs)
Overloaded function.
AddBoundingBoxConstraint(self: pydrake.solvers.MathematicalProgram, arg0: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], arg1: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], arg2: numpy.ndarray[object[m, n], flags.f_contiguous]) -> pydrake.solvers.Binding[BoundingBoxConstraint]
Adds bounding box constraints referencing potentially a subset of the decision variables.
- Parameter
lb
: The lower bound.
- Parameter
ub
: The upper bound.
- Parameter
vars
: Imposes the constraint lb(i, j) <= vars(i, j) <= ub(i, j).
- Returns
The newly constructed BoundingBoxConstraint.
AddBoundingBoxConstraint(self: pydrake.solvers.MathematicalProgram, arg0: float, arg1: float, arg2: pydrake.symbolic.Variable) -> pydrake.solvers.Binding[BoundingBoxConstraint]
Adds bounds for a single variable.
- Parameter
lb
: Lower bound.
- Parameter
ub
: Upper bound.
- Parameter
var
: The decision variable.
AddBoundingBoxConstraint(self: pydrake.solvers.MathematicalProgram, arg0: float, arg1: float, arg2: numpy.ndarray[object[m, n], flags.f_contiguous]) -> pydrake.solvers.Binding[BoundingBoxConstraint]
Adds the same scalar lower and upper bound to every variable in
vars
.- Template parameter
Derived
: An Eigen::Matrix with Variable as the scalar type. The matrix has unknown number of columns at compile time, or has more than one column.
- Parameter
lb
: Lower bound.
- Parameter
ub
: Upper bound.
- Parameter
vars
: The decision variables.
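A minimal sketch of the three overloads above (the bounds are illustrative only):
import numpy as np
from pydrake.solvers import MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(3, "x")
# Same scalar bounds on every entry: -1 <= x(i) <= 1.
prog.AddBoundingBoxConstraint(-1.0, 1.0, x)
# Entry-wise bounds given as vectors lb, ub.
prog.AddBoundingBoxConstraint(np.zeros(3), np.array([1.0, 2.0, 3.0]), x)
# Bounds on a single variable.
prog.AddBoundingBoxConstraint(0.0, 10.0, x[0])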
- AddConstraint(*args, **kwargs)
Overloaded function.
AddConstraint(self: pydrake.solvers.MathematicalProgram, func: Callable, lb: numpy.ndarray[numpy.float64[m, 1]], ub: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]], description: str = ‘’) -> pydrake.solvers.Binding[Constraint]
Adds a constraint using a Python function.
AddConstraint(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.symbolic.Expression, arg1: float, arg2: float) -> pydrake.solvers.Binding[Constraint]
Adds one row of constraint lb <= e <= ub where
e
is a symbolic expression.- Raises
RuntimeError if 1. lb <= e <= ub is a trivial constraint such –
as 1 <= 2 <= 3. 2. lb <= e <= ub is unsatisfiable such as 1 <= –
-5 <= 3 –
- Parameter
e
: A symbolic expression of the decision variables.
- Parameter
lb
: A scalar, the lower bound.
- Parameter
ub
: A scalar, the upper bound.
The resulting constraint may be a BoundingBoxConstraint, LinearConstraint, LinearEqualityConstraint, QuadraticConstraint, or ExpressionConstraint, depending on the arguments. Constraints of the form x == 1 (which could be created as a BoundingBoxConstraint or LinearEqualityConstraint) will be constructed as a LinearEqualityConstraint.
AddConstraint(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.symbolic.Formula) -> pydrake.solvers.Binding[Constraint]
Add a constraint represented by a symbolic formula to the program. The input formula
f
can be of the following forms:e1 <= e2
e1 >= e2
e1 == e2
A conjunction of relational formulas where each conjunct is a relational formula matched by 1, 2, or 3.
Note that first two cases might return an object of Binding<BoundingBoxConstraint>, Binding<LinearConstraint>, or Binding<ExpressionConstraint>, depending on
f
. Also the third case might return an object of Binding<LinearEqualityConstraint> or Binding<ExpressionConstraint>. It throws an exception if 1.
f
is not matched with one of the above patterns. Especially, strict inequalities (<, >) are not allowed. 2.f
is either a trivial constraint such as “1 <= 2” or an unsatisfiable constraint such as “2 <= 1”. 3. It is not possible to find numerical bounds ofe1
ande2
wheref
= e1 ≃ e2. We allowe1
ande2
to be infinite but only if there are no other terms. For example,x <= ∞
is allowed. However,x - ∞ <= 0
is not allowed becausex ↦ ∞
introducesnan
in the evaluation.
AddConstraint(self: pydrake.solvers.MathematicalProgram, constraint: pydrake.solvers.Constraint, vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[Constraint]
Adds a generic constraint to the program. This should only be used if a more specific type of constraint is not available, as it may require the use of a significantly more expensive solver.
AddConstraint(self: pydrake.solvers.MathematicalProgram, formulas: numpy.ndarray[object[m, n], flags.f_contiguous]) -> pydrake.solvers.Binding[Constraint]
Adds a constraint represented by an Eigen::Matrix<symbolic::Formula> or Eigen::Array<symbolic::Formula> to the program. A common use-case of this function is to add a constraint with the element-wise comparison between two Eigen matrices, using
A.array() <= B.array()
. See the following example.Click to expand C++ code...
MathematicalProgram prog; Eigen::Matrix<double, 2, 2> A = ...; Eigen::Vector2d b = ...; auto x = prog.NewContinuousVariables(2, "x"); prog.AddConstraint((A * x).array() <= b.array());
A formula in
formulas
can be of the following forms:e1 <= e2
e1 >= e2
e1 == e2
It throws an exception if AddConstraint(const symbolic::Formula& f) throws an exception for f ∈
formulas
.@overload Binding<Constraint> AddConstraint(const symbolic::Formula& f)
- Template parameter
Derived
: Eigen::Matrix or Eigen::Array with Formula as the Scalar.
AddConstraint(self: pydrake.solvers.MathematicalProgram, binding: pydrake.solvers.Binding[Constraint]) -> pydrake.solvers.Binding[Constraint]
Adds a generic constraint to the program. This should only be used if a more specific type of constraint is not available, as it may require the use of a significantly more expensive solver.
Note
If
binding
.evaluator()->num_constraints() == 0, then this constraint is not added into the MathematicalProgram. We return
binding
directly.
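A minimal sketch of a few of these overloads (the expressions and the Python constraint function below are illustrative only):
import numpy as np
from pydrake.solvers import MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
# A symbolic formula; here this becomes a linear constraint binding.
prog.AddConstraint(x[0] + 2 * x[1] <= 3)
# One row of lb <= e <= ub on a (generally nonlinear) expression.
prog.AddConstraint(x[0] * x[1], 0.0, 1.0)
# A generic constraint from a Python function with vector bounds.
def unit_disk(v):
    return np.array([v[0] ** 2 + v[1] ** 2])
prog.AddConstraint(unit_disk, lb=np.array([0.0]), ub=np.array([1.0]), vars=x)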
- AddCost(*args, **kwargs)
Overloaded function.
AddCost(self: pydrake.solvers.MathematicalProgram, func: Callable, vars: numpy.ndarray[object[m, 1]], description: str = ‘’) -> pydrake.solvers.Binding[Cost]
Adds a cost function.
AddCost(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression) -> pydrake.solvers.Binding[Cost]
Adds a cost in the symbolic form.
- Returns
The newly created cost, together with the bound variables.
AddCost(self: pydrake.solvers.MathematicalProgram, obj: pydrake.solvers.Cost, vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[Cost]
Adds a cost type to the optimization program.
- Parameter
obj
: The added objective.
- Parameter
vars
: The decision variables on which the cost depend.
- AddDecisionVariables(self: pydrake.solvers.MathematicalProgram, decision_variables: numpy.ndarray[object[m, n], flags.f_contiguous]) None
Appends new variables to the end of the existing variables.
- Parameter
decision_variables
: The newly added decision_variables.
- Precondition:
decision_variables
should not intersect with the existing indeterminates in the optimization program.
- Raises
RuntimeError if the preconditions are not satisfied. –
- AddEqualityConstraintBetweenPolynomials(self: pydrake.solvers.MathematicalProgram, p1: pydrake.symbolic.Polynomial, p2: pydrake.symbolic.Polynomial) list[pydrake.solvers.Binding[LinearEqualityConstraint]]
Constraining that two polynomials are the same (i.e., they have the same coefficients for each monomial). This function is often used in sum-of-squares optimization. We will impose the linear equality constraint that the coefficient of a monomial in
p1
is the same as the coefficient of the same monomial inp2
.- Parameter
p1
: Note that p1’s indeterminates should have been registered as indeterminates in this MathematicalProgram object, and p1’s coefficients are affine functions of decision variables in this MathematicalProgram object.
- Parameter
p2
: Note that p2’s indeterminates should have been registered as indeterminates in this MathematicalProgram object, and p2’s coefficients are affine functions of decision variables in this MathematicalProgram object.
Note
It calls
Reparse
to enforce
p1
and
p2
to have this MathematicalProgram’s indeterminates.
- AddExponentialConeConstraint(*args, **kwargs)
Overloaded function.
AddExponentialConeConstraint(self: pydrake.solvers.MathematicalProgram, A: numpy.ndarray[numpy.float64[3, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[3, 1]], vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[ExponentialConeConstraint]
Adds an exponential cone constraint, that z = A * vars + b should be in the exponential cone. Namely {z₀, z₁, z₂ | z₀ ≥ z₁ * exp(z₂ / z₁), z₁ > 0}.
- Parameter
A
: The A matrix in the documentation above. A must have 3 rows.
- Parameter
b
: The b vector in the documentation above.
- Parameter
vars
: The variables bound with this constraint.
AddExponentialConeConstraint(self: pydrake.solvers.MathematicalProgram, z: numpy.ndarray[object[3, 1]]) -> pydrake.solvers.Binding[ExponentialConeConstraint]
Add the constraint that z is in the exponential cone.
- Parameter
z
: The expression in the exponential cone.
- Precondition:
each entry in
z
is a linear expression of the decision variables.
- AddIndeterminate(self: pydrake.solvers.MathematicalProgram, new_indeterminate: pydrake.symbolic.Variable) int
Adds indeterminate. This method appends an indeterminate to the end of the program’s old indeterminates, if
new_indeterminate
is not already in the program’s old indeterminates.- Parameter
new_indeterminate
: The indeterminate to be appended to the program’s old indeterminates.
- Returns
indeterminate_index The index of the added indeterminate in the program’s indeterminates. i.e. prog.indeterminates()(indeterminate_index) = new_indeterminate.
- Precondition:
new_indeterminate
should not intersect with the program’s decision variables.- Precondition:
new_indeterminate should be of CONTINUOUS type.
- AddIndeterminates(*args, **kwargs)
Overloaded function.
AddIndeterminates(self: pydrake.solvers.MathematicalProgram, new_indeterminates: numpy.ndarray[object[m, n], flags.f_contiguous]) -> None
Adds indeterminates. This method appends some indeterminates to the end of the program’s old indeterminates.
- Parameter
new_indeterminates
: The indeterminates to be appended to the program’s old indeterminates.
- Precondition:
new_indeterminates
should not intersect with the program’s old decision variables.- Precondition:
Each entry in new_indeterminates should be of CONTINUOUS type.
AddIndeterminates(self: pydrake.solvers.MathematicalProgram, new_indeterminates: pydrake.symbolic.Variables) -> None
Adds indeterminates. This method appends some indeterminates to the end of the program’s old indeterminates.
- Parameter
new_indeterminates
: The indeterminates to be appended to the program’s old indeterminates.
- Precondition:
new_indeterminates
should not intersect with the program’s old decision variables.- Precondition:
Each entry in new_indeterminates should be of CONTINUOUS type.
- AddL2NormCost(*args, **kwargs)
Overloaded function.
AddL2NormCost(self: pydrake.solvers.MathematicalProgram, A: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[L2NormCost]
Adds an L2 norm cost |Ax+b|₂ (notice this cost is not quadratic since we don’t take the square of the L2 norm).
Note
Currently kL2NormCost is supported by SnoptSolver, IpoptSolver, GurobiSolver, MosekSolver, ClarabelSolver, and SCSSolver.
AddL2NormCost(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression, psd_tol: float = 1e-08, coefficient_tol: float = 1e-08) -> pydrake.solvers.Binding[L2NormCost]
Adds an L2 norm cost |Ax+b|₂ from a symbolic expression which can be decomposed into sqrt((Ax+b)’(Ax+b)). See symbolic::DecomposeL2NormExpression for details on the tolerance parameters.
- Raises
RuntimeError if e cannot be decomposed into an L2 norm. –
- AddL2NormCostUsingConicConstraint(self: pydrake.solvers.MathematicalProgram, A: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) tuple[pydrake.symbolic.Variable, pydrake.solvers.Binding[LinearCost], pydrake.solvers.Binding[LorentzConeConstraint]]
Adds an L2 norm cost min |Ax+b|₂ as a linear cost min s on the slack variable s, together with a Lorentz cone constraint s ≥ |Ax+b|₂. Many conic optimization solvers (Gurobi, MOSEK™, SCS, etc.) natively prefer this form of linear cost + conic constraints. So if you are going to use one of these conic solvers, add the L2 norm cost using this function instead of AddL2NormCost().
- Returns
(s, linear_cost, lorentz_cone_constraint).
s
is the slack variable (with variable name string as “slack”),linear_cost
is the cost ons
, andlorentz_cone_constraint
is the constraint s≥|Ax+b|₂
- AddLinearComplementarityConstraint(self: pydrake.solvers.MathematicalProgram, M: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], q: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) pydrake.solvers.Binding[LinearComplementarityConstraint]
Adds a linear complementarity constraint referencing a subset of the decision variables.
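A minimal sketch (M and q are illustrative only; Solve() will pick a solver that can handle an LCP):
import numpy as np
from pydrake.solvers import MathematicalProgram, Solve
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
M = np.array([[2.0, 1.0], [0.0, 2.0]])
q = np.array([-1.0, -1.0])
# Imposes 0 <= x, M x + q >= 0, and x'(M x + q) = 0.
prog.AddLinearComplementarityConstraint(M, q, x)
result = Solve(prog)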
- AddLinearConstraint(*args, **kwargs)
Overloaded function.
AddLinearConstraint(self: pydrake.solvers.MathematicalProgram, A: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], lb: numpy.ndarray[numpy.float64[m, 1]], ub: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[LinearConstraint]
Adds linear constraints referencing potentially a subset of the decision variables (defined in the vars parameter).
AddLinearConstraint(self: pydrake.solvers.MathematicalProgram, a: numpy.ndarray[numpy.float64[1, n]], lb: float, ub: float, vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[LinearConstraint]
Adds one row of linear constraint referencing potentially a subset of the decision variables (defined in the vars parameter). lb <= a*vars <= ub
- Parameter
a
: A row vector.
- Parameter
lb
: A scalar, the lower bound.
- Parameter
ub
: A scalar, the upper bound.
- Parameter
vars
: The decision variables on which to impose the linear constraint.
AddLinearConstraint(self: pydrake.solvers.MathematicalProgram, A: scipy.sparse.csc_matrix[numpy.float64], lb: numpy.ndarray[numpy.float64[m, 1]], ub: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[LinearConstraint]
Adds sparse linear constraints referencing potentially a subset of the decision variables (defined in the vars parameter).
AddLinearConstraint(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression, lb: float, ub: float) -> pydrake.solvers.Binding[LinearConstraint]
Adds one row of linear constraint lb <= e <= ub where
e
is a symbolic expression.- Raises
RuntimeError if 1. e is a non-linear expression. 2. ``lb <= e –
<= ub`` is a trivial constraint such as 1 <= 2 <= 3. 3. ``lb <= e –
<= ub`` is unsatisfiable such as 1 <= -5 <= 3 –
- Parameter
e
: A linear symbolic expression in the form of
c0 + c1 * v1 + ... + cn * vn
wherec_i
is a constant and @v_i is a variable.- Parameter
lb
: A scalar, the lower bound.
- Parameter
ub
: A scalar, the upper bound.
AddLinearConstraint(self: pydrake.solvers.MathematicalProgram, v: numpy.ndarray[object[m, n], flags.f_contiguous], lb: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], ub: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous]) -> pydrake.solvers.Binding[LinearConstraint]
Adds linear constraints represented by symbolic expressions to the program. It throws if @v includes a non-linear expression or
lb <= v <= ub
includes trivial/unsatisfiable constraints.AddLinearConstraint(self: pydrake.solvers.MathematicalProgram, f: pydrake.symbolic.Formula) -> pydrake.solvers.Binding[LinearConstraint]
Add a linear constraint represented by a symbolic formula to the program. The input formula
f
can be of the following forms:e1 <= e2
e1 >= e2
e1 == e2
A conjunction of relational formulas where each conjunct is a relational formula matched by 1, 2, or 3.
Note that first two cases might return an object of Binding<BoundingBoxConstraint> depending on
f
. Also the third case returns an object of Binding<LinearEqualityConstraint>. It throws an exception if 1.
f
is not matched with one of the above patterns. Especially, strict inequalities (<, >) are not allowed. 2.f
includes a non-linear expression. 3.f
is either a trivial constraint such as “1 <= 2” or an unsatisfiable constraint such as “2 <= 1”. 4. It is not possible to find numerical bounds ofe1
ande2
wheref
= e1 ≃ e2. We allowe1
ande2
to be infinite but only if there are no other terms. For example,x <= ∞
is allowed. However,x - ∞ <= 0
is not allowed becausex ↦ ∞
introducesnan
in the evaluation.
AddLinearConstraint(self: pydrake.solvers.MathematicalProgram, formulas: numpy.ndarray[object[m, n], flags.f_contiguous]) -> pydrake.solvers.Binding[LinearConstraint]
Add a linear constraint represented by an Eigen::Array<symbolic::Formula> to the program. A common use-case of this function is to add a linear constraint with the element-wise comparison between two Eigen matrices, using
A.array() <= B.array()
. See the following example.Click to expand C++ code...
MathematicalProgram prog; Eigen::Matrix<double, 2, 2> A; auto x = prog.NewContinuousVariables(2, "x"); Eigen::Vector2d b; ... // set up A and b prog.AddLinearConstraint((A * x).array() <= b.array());
A formula in
formulas
can be of the following forms:e1 <= e2 2. e1 >= e2 3. e1 == e2
It throws an exception if AddLinearConstraint(const symbolic::Formula& f) throws an exception for f ∈
formulas
.- Template parameter
Derived
: An Eigen Array type of Formula.
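A minimal sketch of the matrix and formula overloads (the coefficients are illustrative only):
import numpy as np
from pydrake.solvers import MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
# Matrix form: lb <= A x <= ub.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
lb = np.array([-1.0, -2.0])
ub = np.array([2.0, 4.0])
prog.AddLinearConstraint(A, lb, ub, x)
# Symbolic formula form.
prog.AddLinearConstraint(x[0] - x[1] == 1)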
- AddLinearCost(*args, **kwargs)
Overloaded function.
AddLinearCost(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression) -> pydrake.solvers.Binding[LinearCost]
Adds a linear cost term of the form a’*x + b.
- Parameter
e
: A linear symbolic expression.
- Precondition:
e is a linear expression a’*x + b, where each entry of x is a decision variable in the mathematical program.
- Returns
The newly added linear cost, together with the bound variables.
AddLinearCost(self: pydrake.solvers.MathematicalProgram, a: numpy.ndarray[numpy.float64[m, 1]], b: float, vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[LinearCost]
Adds a linear cost term of the form a’*x + b. Applied to a subset of the variables and pushes onto the linear cost data structure.
AddLinearCost(self: pydrake.solvers.MathematicalProgram, a: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[LinearCost]
Adds a linear cost term of the form a’*x. Applied to a subset of the variables and pushes onto the linear cost data structure.
- AddLinearEqualityConstraint(*args, **kwargs)
Overloaded function.
AddLinearEqualityConstraint(self: pydrake.solvers.MathematicalProgram, Aeq: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], beq: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[LinearEqualityConstraint]
AddLinearEqualityConstraint
Adds linear equality constraints referencing potentially a subset of the decision variables.
Example: to add two equality constraints which only depend on two of the elements of x, you could use
Click to expand C++ code...
auto x = prog.NewContinuousVariables(6,"myvar"); Eigen::Matrix2d Aeq; Aeq << -1, 2, 1, 1; Eigen::Vector2d beq(1, 3); // Imposes constraint // -x(0) + 2x(1) = 1 // x(0) + x(1) = 3 prog.AddLinearEqualityConstraint(Aeq, beq, x.head<2>());
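The same example as a Python sketch (x.head<2>() becomes the slice x[:2]):
import numpy as np
from pydrake.solvers import MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(6, "myvar")
Aeq = np.array([[-1.0, 2.0], [1.0, 1.0]])
beq = np.array([1.0, 3.0])
# Imposes -x(0) + 2 x(1) = 1 and x(0) + x(1) = 3 on the first two entries of x.
prog.AddLinearEqualityConstraint(Aeq, beq, x[:2])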
AddLinearEqualityConstraint(self: pydrake.solvers.MathematicalProgram, a: numpy.ndarray[numpy.float64[1, n]], beq: float, vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[LinearEqualityConstraint]
Adds one row of linear equality constraint referencing potentially a subset of decision variables.
\[ax = beq\]- Parameter
a
: A row vector.
- Parameter
beq
: A scalar.
- Parameter
vars
: The decision variables on which the constraint is imposed.
AddLinearEqualityConstraint(self: pydrake.solvers.MathematicalProgram, Aeq: scipy.sparse.csc_matrix[numpy.float64], beq: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[LinearEqualityConstraint]
AddLinearEqualityConstraint
Adds linear equality constraints referencing potentially a subset of the decision variables using a sparse A matrix.
AddLinearEqualityConstraint(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression, b: float) -> pydrake.solvers.Binding[LinearEqualityConstraint]
Adds one row of linear constraint e = b where
e
is a symbolic expression.- Raises
RuntimeError if 1. e is a non-linear expression. 2. e is a –
constant. –
- Parameter
e
: A linear symbolic expression in the form of
c0 + c1 * x1 + ... + cn * xn
wherec_i
is a constant and @x_i is a variable.- Parameter
b
: A scalar.
- Returns
The newly added linear equality constraint, together with the bound variable.
AddLinearEqualityConstraint(self: pydrake.solvers.MathematicalProgram, f: pydrake.symbolic.Formula) -> pydrake.solvers.Binding[LinearEqualityConstraint]
Adds a linear equality constraint represented by a symbolic formula to the program. The input formula
f
is either an equality formula (e1 == e2
) or a conjunction of equality formulas.It throws an exception if
f
is neither an equality formula nor a conjunction of equalities.f
includes a non-linear expression.
AddLinearEqualityConstraint(self: pydrake.solvers.MathematicalProgram, v: numpy.ndarray[object[m, 1]], b: numpy.ndarray[numpy.float64[m, 1]]) -> pydrake.solvers.Binding[LinearEqualityConstraint]
Adds linear equality constraints \(v = b\), where
v(i)
is a symbolic linear expression.- Raises
RuntimeError if 1. v(i) is a non-linear expression. 2. –
v(i) is a constant. –
- Template parameter
DerivedV
: An Eigen Matrix type of Expression. A column vector.
- Template parameter
DerivedB
: An Eigen Matrix type of double. A column vector.
- Parameter
v
: v(i) is a linear symbolic expression in the form of c0 + c1 * x1 + ... + cn * xn, where ci is a constant and xi is a variable.
- Parameter
b
: A vector of doubles.
- Returns
The newly added linear equality constraint, together with the bound variables.
- AddLinearMatrixInequalityConstraint(self: pydrake.solvers.MathematicalProgram, F: list[numpy.ndarray[numpy.float64[m, n]]], vars: numpy.ndarray[object[m, 1]]) pydrake.solvers.Binding[LinearMatrixInequalityConstraint]
Adds a linear matrix inequality constraint to the program.
- AddLogDeterminantLowerBoundConstraint(self: pydrake.solvers.MathematicalProgram, X: numpy.ndarray[object[m, n], flags.f_contiguous], lower: float) tuple[pydrake.solvers.Binding[LinearConstraint], numpy.ndarray[object[m, 1]], numpy.ndarray[object[m, n]]]
Impose the constraint log(det(X)) >= lower. See log_determinant for more details.
- Parameter
X
: A symmetric positive semidefinite matrix X.
- Parameter
lower
: The lower bound of log(det(X))
- Returns
(constraint, t, Z) constraint is ∑ᵢt(i) >= lower, we also return the newly created slack variables t and the lower triangular matrix Z. Note that Z is not a matrix of symbolic::Variable but symbolic::Expression, because the upper-diagonal entries of Z are not variable, but expression 0.
- Precondition:
X is a symmetric matrix.
- AddLorentzConeConstraint(*args, **kwargs)
Overloaded function.
AddLorentzConeConstraint(self: pydrake.solvers.MathematicalProgram, f: pydrake.symbolic.Formula, eval_type: pydrake.solvers.LorentzConeConstraint.EvalType = <EvalType.kConvexSmooth: 1>, psd_tol: float = 1e-08, coefficient_tol: float = 1e-08) -> pydrake.solvers.Binding[LorentzConeConstraint]
Adds a Lorentz cone constraint of the form Ax+b >= |Cx+d|₂ from a symbolic formula with one side which can be decomposed into sqrt((Cx+d)’(Cx+d)).
- Parameter
eval_type
: The evaluation type when evaluating the lorentz cone constraint in generic optimization. Refer to LorentzConeConstraint::EvalType for more details.
See symbolic::DecomposeL2NormExpression for details on the tolerance parameters,
psd_tol
andcoefficient_tol
. Consider using the overload which takes a vector of expressions to avoid the numerical decomposition.- Raises
RuntimeError if f cannot be decomposed into a Lorentz cone. –
AddLorentzConeConstraint(self: pydrake.solvers.MathematicalProgram, v: numpy.ndarray[object[m, 1]], eval_type: pydrake.solvers.LorentzConeConstraint.EvalType = <EvalType.kConvexSmooth: 1>) -> pydrake.solvers.Binding[LorentzConeConstraint]
Adds Lorentz cone constraint referencing potentially a subset of the decision variables.
- Parameter
v
: An Eigen::Vector of symbolic::Expression. Constraining that
\[v_0 \ge \sqrt{v_1^2 + ... + v_{n-1}^2}\]- Returns
The newly constructed Lorentz cone constraint with the bounded variables. For example, to add the Lorentz cone constraint
x+1 >= sqrt(y² + 2y + x² + 5) = sqrt((y+1)² + x² + 2²), the user could call
Click to expand C++ code...
{cc} Vector4<symbolic::Expression> v(x+1, y+1, x, 2.); prog.AddLorentzConeConstraint(v);
- Parameter
eval_type
: The evaluation type when evaluating the lorentz cone constraint in generic optimization. Refer to LorentzConeConstraint::EvalType for more details.
AddLorentzConeConstraint(self: pydrake.solvers.MathematicalProgram, linear_expression: pydrake.symbolic.Expression, quadratic_expression: pydrake.symbolic.Expression, tol: float = 0.0, eval_type: pydrake.solvers.LorentzConeConstraint.EvalType = <EvalType.kConvexSmooth: 1>) -> pydrake.solvers.Binding[LorentzConeConstraint]
Adds Lorentz cone constraint on the linear expression v1 and quadratic expression v2, such that v1 >= sqrt(v2)
- Parameter
linear_expression
: The linear expression v1.
- Parameter
quadratic_expression
: The quadratic expression v2.
- Parameter
tol
: The tolerance to determine if the matrix in v2 is positive semidefinite or not.
See also
DecomposePositiveQuadraticForm for more explanation. Default: 0.
- Parameter
eval_type
: The evaluation type when evaluating the lorentz cone constraint in generic optimization. Refer to LorentzConeConstraint::EvalType for more details.
- Returns
binding
: The newly added Lorentz cone constraint, together with the bound variables.
Precondition: 1.
v1
is a linear expression, in the form of c’*x + d. 2.v2
is a quadratic expression, in the form ofClick to expand C++ code...
x'*Q*x + b'x + a
Also the quadratic expression has to be convex, namely Q is a positive semidefinite matrix, and the quadratic expression needs to be non-negative for any x.
- Raises
RuntimeError if the preconditions are not satisfied. –
Notice this constraint is equivalent to the vector [z;y] is within a Lorentz cone, where
Click to expand C++ code...
z = v1 y = R * x + d
while (R, d) satisfies y’*y = x’*Q*x + b’*x + a For example, to add the Lorentz cone constraint
x+1 >= sqrt(y² + 2y + x² + 4), the user could call
Click to expand C++ code...
{cc} prog.AddLorentzConeConstraint(x+1, pow(y, 2) + 2 * y + pow(x, 2) + 4);
AddLorentzConeConstraint(self: pydrake.solvers.MathematicalProgram, A: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]], eval_type: pydrake.solvers.LorentzConeConstraint.EvalType = <EvalType.kConvexSmooth: 1>) -> pydrake.solvers.Binding[LorentzConeConstraint]
Adds Lorentz cone constraint referencing potentially a subset of the decision variables (defined in the vars parameter). The linear expression \(z=Ax+b\) is in the Lorentz cone. A vector \(z \in\mathbb{R}^n\) is in the Lorentz cone, if
\[z_0 \ge \sqrt{z_1^2 + ... + z_{n-1}^2}\]- Parameter
A
: A \(\mathbb{R}^{n\times m}\) matrix, whose number of columns equals to the size of the decision variables.
- Parameter
b
: A \(\mathbb{R}^n\) vector, whose number of rows equals to the size of the decision variables.
- Parameter
vars
: The Eigen vector of \(m\) decision variables.
- Parameter
eval_type
: The evaluation type when evaluating the lorentz cone constraint in generic optimization. Refer to LorentzConeConstraint::EvalType for more details.
- Returns
The newly added Lorentz cone constraint.
For example, to add the Lorentz cone constraint
x+1 >= sqrt(y² + 2y + x² + 5) = sqrt((y+1)² + x² + 2²), the user could call
Click to expand C++ code...
{cc} Eigen::Matrix<double, 4, 2> A; Eigen::Vector4d b; A << 1, 0, 0, 1, 1, 0, 0, 0; b << 1, 1, 0, 2; // A * [x;y] + b = [x+1; y+1; x; 2] prog.AddLorentzConeConstraint(A, b, Vector2<symbolic::Variable>(x, y));
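The same constraint built as a Python sketch (A and b are copied from the C++ example above):
import numpy as np
from pydrake.solvers import MathematicalProgram
prog = MathematicalProgram()
xy = prog.NewContinuousVariables(2, "xy")   # xy = [x, y]
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0],
              [0.0, 0.0]])
b = np.array([1.0, 1.0, 0.0, 2.0])
# A @ xy + b = [x+1, y+1, x, 2], so this imposes x+1 >= sqrt((y+1)^2 + x^2 + 2^2).
prog.AddLorentzConeConstraint(A, b, xy)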
- AddMaximizeGeometricMeanCost(*args, **kwargs)
Overloaded function.
AddMaximizeGeometricMeanCost(self: pydrake.solvers.MathematicalProgram, A: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], x: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[LinearCost]
An overloaded version of maximize_geometric_mean.
- Returns
cost The added cost (note that since MathematicalProgram only minimizes the cost, the returned cost evaluates to -power(∏ᵢz(i), 1/n), where z = A*x+b).
- Precondition:
A.rows() == b.rows(), A.rows() >= 2.
AddMaximizeGeometricMeanCost(self: pydrake.solvers.MathematicalProgram, x: numpy.ndarray[object[m, 1]], c: float) -> pydrake.solvers.Binding[LinearCost]
An overloaded version of maximize_geometric_mean. We add the cost to maximize the geometric mean of x, i.e., c*power(∏ᵢx(i), 1/n).
- Parameter
c
: The positive coefficient of the geometric mean cost. Default: 1.
- Returns
cost The added cost (note that since MathematicalProgram only minimizes the cost, the returned cost evaluates to -c * power(∏ᵢx(i), 1/n)).
- Precondition:
x.rows() >= 2.
- Precondition:
c > 0.
- AddMaximizeLogDeterminantCost(self: pydrake.solvers.MathematicalProgram, X: numpy.ndarray[object[m, n], flags.f_contiguous]) tuple[pydrake.solvers.Binding[LinearCost], numpy.ndarray[object[m, 1]], numpy.ndarray[object[m, n]]]
Maximize the log determinant. See log_determinant for more details.
- Parameter
X
: A symmetric positive semidefinite matrix X, whose log(det(X)) will be maximized.
- Returns
(cost, t, Z) cost is -∑ᵢt(i), we also return the newly created slack variables t and the lower triangular matrix Z. Note that Z is not a matrix of symbolic::Variable but symbolic::Expression, because the upper-diagonal entries of Z are not variable, but expression 0.
- Precondition:
X is a symmetric matrix.
- AddPositiveDiagonallyDominantDualConeMatrixConstraint(*args, **kwargs)
Overloaded function.
AddPositiveDiagonallyDominantDualConeMatrixConstraint(self: pydrake.solvers.MathematicalProgram, X: numpy.ndarray[object[m, n], flags.f_contiguous]) -> pydrake.solvers.Binding[LinearConstraint]
This is an overloaded variant of add_dd_dual “diagonally dominant dual cone constraint”
- Parameter
X
: The matrix X. We will use 0.5(X+Xᵀ) as the “symmetric version” of X.
- Precondition:
X(i, j) should be a linear expression of decision variables.
- Returns
A linear constraint of size n² encoding vᵢᵀXvᵢ ≥ 0
AddPositiveDiagonallyDominantDualConeMatrixConstraint(self: pydrake.solvers.MathematicalProgram, X: numpy.ndarray[object[m, n], flags.f_contiguous]) -> pydrake.solvers.Binding[LinearConstraint]
This is an overloaded variant of add_dd_dual “diagonally dominant dual cone constraint”
- Parameter
X
: The matrix X. We will use 0.5(X+Xᵀ) as the “symmetric version” of X.
- Returns
A linear constraint of size n² encoding vᵢᵀXvᵢ ≥ 0
- AddPositiveDiagonallyDominantMatrixConstraint(self: pydrake.solvers.MathematicalProgram, X: numpy.ndarray[object[m, n], flags.f_contiguous]) numpy.ndarray[object[m, n]]
Adds the constraint that a symmetric matrix is diagonally dominant with non-negative diagonal entries. A symmetric matrix X is diagonally dominant with non-negative diagonal entries if X(i, i) >= ∑ⱼ |X(i, j)| ∀ j ≠ i namely in each row, the diagonal entry is larger than the sum of the absolute values of all other entries in the same row. A matrix being diagonally dominant with non-negative diagonals is a sufficient (but not necessary) condition of a matrix being positive semidefinite. Internally we will create a matrix Y as slack variables, such that Y(i, j) represents the absolute value |X(i, j)| ∀ j ≠ i. The diagonal entries Y(i, i) = X(i, i) The users can refer to “DSOS and SDSOS Optimization: More Tractable Alternatives to Sum of Squares and Semidefinite Optimization” by Amir Ali Ahmadi and Anirudha Majumdar, with arXiv link https://arxiv.org/abs/1706.02586
- Parameter
X
: The matrix X. We will use 0.5(X+Xᵀ) as the “symmetric version” of X.
- Returns
Y The slack variable. Y(i, j) represents |X(i, j)| ∀ j ≠ i, with the constraint Y(i, j) >= X(i, j) and Y(i, j) >= -X(i, j). Y is a symmetric matrix. The diagonal entries Y(i, i) = X(i, i)
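An illustrative Python sketch (the 3x3 size and names are chosen here, not prescribed by the docs) of imposing diagonal dominance on a symmetric matrix of decision variables:
Click to expand Python code...
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
X = prog.NewSymmetricContinuousVariables(3, "X")
# Y(i, j) are the slack variables bounding |X(i, j)| for i != j; Y(i, i) equals X(i, i).
Y = prog.AddPositiveDiagonallyDominantMatrixConstraint(X)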
- AddPositiveSemidefiniteConstraint(*args, **kwargs)
Overloaded function.
AddPositiveSemidefiniteConstraint(self: pydrake.solvers.MathematicalProgram, arg0: numpy.ndarray[object[m, n], flags.f_contiguous]) -> pydrake.solvers.Binding[PositiveSemidefiniteConstraint]
Adds a positive semidefinite constraint on a symmetric matrix.
- Raises
RuntimeError in Debug mode if symmetric_matrix_var is not –
symmetric. –
- Parameter
symmetric_matrix_var
: A symmetric MatrixDecisionVariable object.
AddPositiveSemidefiniteConstraint(self: pydrake.solvers.MathematicalProgram, arg0: numpy.ndarray[object[m, n], flags.f_contiguous]) -> pydrake.solvers.Binding[PositiveSemidefiniteConstraint]
Adds a positive semidefinite constraint on a symmetric matrix of symbolic expressions
e
. We create a new symmetric matrix of variables M being positive semidefinite, with the linear equality constraint e == M.
- Parameter
e
: Imposes constraint “e is positive semidefinite”.
- Precondition:
{1. e is symmetric. 2. e(i, j) is linear for all i, j }
- Returns
The newly added positive semidefinite constraint, with the bound variable M that are also newly added.
For example, to add a constraint that
⌈x+1   2x+3  x+y⌉
|2x+3    2     0 |
⌊x+y     0     x ⌋
is positive semidefinite, the user could call
Click to expand C++ code...
Matrix3<symbolic::Expression> e;
e << x+1, 2*x+3, x+y, 2*x+3, 2, 0, x+y, 0, x;
prog.AddPositiveSemidefiniteConstraint(e);
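The same pattern in Python, as an illustrative sketch (the matrix sizes and entries are arbitrary), for both a symmetric matrix of decision variables and a symmetric matrix of linear expressions:
Click to expand Python code...
import numpy as np
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
# Overload 1: a symmetric matrix of decision variables.
X = prog.NewSymmetricContinuousVariables(3, "X")
prog.AddPositiveSemidefiniteConstraint(X)
# Overload 2: a symmetric matrix of linear expressions.
x = prog.NewContinuousVariables(2, "x")
e = np.array([[x[0] + 1.0, x[0] + x[1]],
              [x[0] + x[1], x[1] + 2.0]])
prog.AddPositiveSemidefiniteConstraint(e)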
- AddPrincipalSubmatrixIsPsdConstraint(*args, **kwargs)
Overloaded function.
AddPrincipalSubmatrixIsPsdConstraint(self: pydrake.solvers.MathematicalProgram, arg0: numpy.ndarray[object[m, n], flags.f_contiguous], arg1: set[int]) -> pydrake.solvers.Binding[PositiveSemidefiniteConstraint]
Adds a constraint that the principal submatrix of a symmetric matrix composed of the indices in minor_indices is positive semidefinite.
- Precondition:
The passed
symmetric_matrix_var
is a symmetric matrix.
- Precondition:
All values in
minor_indices
lie in the range [0, symmetric_matrix_var.rows() - 1].
- Parameter
symmetric_matrix_var
: A symmetric MatrixDecisionVariable object.
See also
AddPositiveSemidefiniteConstraint.
AddPrincipalSubmatrixIsPsdConstraint(self: pydrake.solvers.MathematicalProgram, arg0: numpy.ndarray[object[m, n], flags.f_contiguous], arg1: set[int]) -> pydrake.solvers.Binding[PositiveSemidefiniteConstraint]
Adds a constraint that the principal submatrix of a symmetric matrix of expressions composed of the indices in minor_indices is positive semidefinite.
- Precondition:
The passed
symmetric_matrix_var
is a symmetric matrix.
- Precondition:
All values in
minor_indices
lie in the range [0, symmetric_matrix_var.rows() - 1].
- Parameter
e
: Imposes constraint “e is positive semidefinite”.
See also
AddPositiveSemidefiniteConstraint.
- AddQuadraticAsRotatedLorentzConeConstraint(self: pydrake.solvers.MathematicalProgram, Q: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], c: float, vars: numpy.ndarray[object[m, 1]], psd_tol: float = 0.0) pydrake.solvers.Binding[RotatedLorentzConeConstraint]
Add the convex quadratic constraint 0.5xᵀQx + bᵀx + c <= 0 as a rotated Lorentz cone constraint [rᵀx+s, 1, Px+q] is in the rotated Lorentz cone. When solving the optimization problem using conic solvers (like Mosek, Gurobi, SCS, etc), it is numerically preferable to impose the convex quadratic constraint as rotated Lorentz cone constraint. See https://docs.mosek.com/latest/capi/prob-def-quadratic.html#a-recommendation
- Raises
exception if this quadratic constraint is not convex (Q is not –
positive semidefinite) –
- Parameter
Q
: The Hessian of the quadratic constraint. Should be positive semidefinite.
- Parameter
b
: The linear coefficient of the quadratic constraint.
- Parameter
c
: The constant term of the quadratic constraint.
- Parameter
vars
: x in the documentation above.
- Parameter
psd_tol
: If the minimal eigenvalue of Q is smaller than -psd_tol, then throw an exception. $*Default:* = 0.
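An illustrative Python sketch (the unit-disk constraint below is just an example) of passing the quadratic data directly:
Click to expand Python code...
import numpy as np
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
# Impose xᵀx - 1 <= 0 (the unit disk): 0.5*xᵀ(2I)x + 0ᵀx - 1 <= 0.
prog.AddQuadraticAsRotatedLorentzConeConstraint(
    Q=2 * np.eye(2), b=np.zeros(2), c=-1.0, vars=x)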
- AddQuadraticConstraint(*args, **kwargs)
Overloaded function.
AddQuadraticConstraint(self: pydrake.solvers.MathematicalProgram, Q: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], lb: float, ub: float, vars: numpy.ndarray[object[m, 1]], hessian_type: Optional[pydrake.solvers.QuadraticConstraint.HessianType] = None) -> pydrake.solvers.Binding[QuadraticConstraint]
Adds quadratic constraint lb ≤ .5 xᵀQx + bᵀx ≤ ub Notice that if your quadratic constraint is convex, and you intend to solve the problem with a convex solver (like Mosek), then it is better to reformulate it with a second order cone constraint. See https://docs.mosek.com/10.1/capi/prob-def-quadratic.html#a-recommendation for an explanation.
- Parameter
vars
: x in the documentation above.
- Parameter
hessian_type
: Whether the Hessian is positive semidefinite, negative semidefinite or indefinite. Drake will check the type if hessian_type=std::nullopt. Specifying the hessian type will speed this method up.
- Precondition:
hessian_type should be correct if it is not std::nullopt, as we will blindly trust it in the downstream code.
AddQuadraticConstraint(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression, lb: float, ub: float, hessian_type: Optional[pydrake.solvers.QuadraticConstraint.HessianType] = None) -> pydrake.solvers.Binding[QuadraticConstraint]
Overloads AddQuadraticConstraint, impose lb <= e <= ub where
e
is a quadratic expression. Notice that if your quadratic constraint is convex, and you intend to solve the problem with a convex solver (like Mosek), then it is better to reformulate it with a second order cone constraint. See https://docs.mosek.com/10.1/capi/prob-def-quadratic.html#a-recommendation for an explanation.
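An illustrative Python sketch of the expression overload (the bounds and expression are arbitrary):
Click to expand Python code...
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
# 0 <= x0² + x1² <= 1. Passing hessian_type (a QuadraticConstraint.HessianType
# value) is optional; when omitted, Drake determines the Hessian type itself.
prog.AddQuadraticConstraint(x[0]**2 + x[1]**2, 0.0, 1.0)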
- AddQuadraticCost(*args, **kwargs)
Overloaded function.
AddQuadraticCost(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression, is_convex: Optional[bool] = None) -> pydrake.solvers.Binding[QuadraticCost]
Add a quadratic cost term of the form 0.5*x’*Q*x + b’*x + c.
- Parameter
e
: A quadratic symbolic expression.
- Parameter
is_convex
: Whether the cost is already known to be convex. If is_convex=nullopt (the default), then Drake will determine if
e
is a convex quadratic cost or not. To improve the computation speed, the user can set is_convex if the user knows whether the cost is convex or not.
- Raises
RuntimeError if the expression is not quadratic. –
- Returns
The newly added cost together with the bound variables.
AddQuadraticCost(self: pydrake.solvers.MathematicalProgram, Q: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]], is_convex: Optional[bool] = None) -> pydrake.solvers.Binding[QuadraticCost]
Adds a cost term of the form 0.5*x’*Q*x + b’x Applied to subset of the variables.
- Parameter
is_convex
: Whether the cost is already known to be convex. If is_convex=nullopt (the default), then Drake will determine if this is a convex quadratic cost or not. To improve the computation speed, the user can set is_convex if the user knows whether the cost is convex or not.
AddQuadraticCost(self: pydrake.solvers.MathematicalProgram, Q: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], c: float, vars: numpy.ndarray[object[m, 1]], is_convex: Optional[bool] = None) -> pydrake.solvers.Binding[QuadraticCost]
Adds a cost term of the form 0.5*x’*Q*x + b’x + c Applied to subset of the variables.
- Parameter
is_convex
: Whether the cost is already known to be convex. If is_convex=nullopt (the default), then Drake will determine if this is a convex quadratic cost or not. To improve the computation speed, the user can set is_convex if the user knows whether the cost is convex or not.
- AddQuadraticErrorCost(self: pydrake.solvers.MathematicalProgram, Q: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], x_desired: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) pydrake.solvers.Binding[QuadraticCost]
Adds a cost term of the form (x-x_desired)’*Q*(x-x_desired).
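For example, a hypothetical least-squares tracking cost in Python (Q and x_desired are chosen here purely for illustration):
Click to expand Python code...
import numpy as np
from pydrake.solvers import MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
# Penalize (x - [1, 2])ᵀ I (x - [1, 2]); the unconstrained optimum is x = [1, 2].
prog.AddQuadraticErrorCost(np.eye(2), np.array([1.0, 2.0]), x)
result = Solve(prog)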
- AddRotatedLorentzConeConstraint(*args, **kwargs)
Overloaded function.
AddRotatedLorentzConeConstraint(self: pydrake.solvers.MathematicalProgram, linear_expression1: pydrake.symbolic.Expression, linear_expression2: pydrake.symbolic.Expression, quadratic_expression: pydrake.symbolic.Expression, tol: float = 0) -> pydrake.solvers.Binding[RotatedLorentzConeConstraint]
Adds rotated Lorentz cone constraint on the linear expression v1, v2 and quadratic expression u, such that v1 * v2 >= u, v1 >= 0, v2 >= 0
- Parameter
linear_expression1
: The linear expression v1.
- Parameter
linear_expression2
: The linear expression v2.
- Parameter
quadratic_expression
: The quadratic expression u.
- Parameter
tol
: The tolerance to determine if the matrix in v2 is positive semidefinite or not.
See also
DecomposePositiveQuadraticForm for more explanation. $*Default:* is 0.
- Returns
binding
: The newly added rotated Lorentz cone constraint, together with the bound variables.
Precondition:
1. linear_expression1 is a linear (affine) expression, in the form of v1 = c1’*x + d1.
2. linear_expression2 is a linear (affine) expression, in the form of v2 = c2’*x + d2.
3. quadratic_expression is a quadratic expression, in the form of
Click to expand C++ code...
u = x'*Q*x + b'x + a
Also the quadratic expression has to be convex, namely Q is a positive semidefinite matrix, and the quadratic expression needs to be non-negative for any x.
- Raises
RuntimeError if the preconditions are not satisfied. –
For example, to add the rotated Lorentz cone constraint
(x+1)(x+y) >= x²+z²+2z+5
x+1 >= 0
x+y >= 0
The user could call
Click to expand C++ code...
prog.AddRotatedLorentzConeConstraint(x+1, x+y, pow(x, 2) + pow(z, 2) + 2*z+5);
AddRotatedLorentzConeConstraint(self: pydrake.solvers.MathematicalProgram, v: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[RotatedLorentzConeConstraint]
Adds a constraint that a symbolic expression
- Parameter
v
: is in the rotated Lorentz cone, i.e.,
\[v_0 v_1 \ge v_2^2 + ... + v_{n-1}^2, \quad v_0 \ge 0, \quad v_1 \ge 0\]
- Parameter
v
: A linear expression of variables, \(v = A x + b\), where \(A, b\) are given matrices of the correct size, \(x\) is the vector of decision variables.
- Returns
binding
: The newly added rotated Lorentz cone constraint, together with the bound variables.
For example, to add the rotated Lorentz cone constraint
(x+1)(x+y) >= x²+z²+2z+5 = x² + (z+1)² + 2²
x+1 >= 0
x+y >= 0
The user could call
Click to expand C++ code...
Eigen::Matrix<symbolic::Expression, 5, 1> v;
v << x+1, x+y, x, z+1, 2;
prog.AddRotatedLorentzConeConstraint(v);
AddRotatedLorentzConeConstraint(self: pydrake.solvers.MathematicalProgram, A: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], vars: numpy.ndarray[object[m, 1]]) -> pydrake.solvers.Binding[RotatedLorentzConeConstraint]
Adds a rotated Lorentz cone constraint referencing potentially a subset of decision variables, The linear expression \(z=Ax+b\) is in rotated Lorentz cone. A vector \(z \in\mathbb{R}^n\) is in the rotated Lorentz cone, if
\[z_0z_1 \ge z_2^2 + ... + z_{n-1}^2\]where \(A\in\mathbb{R}^{n\times m}, b\in\mathbb{R}^n\) are given matrices.
- Parameter
A
: A matrix whose number of columns equals to the size of the decision variables.
- Parameter
b
: A vector whose number of rows equals the number of rows in A.
- Parameter
vars
: The decision variables on which the constraint is imposed.
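The symbolic-expression overload of the example above, sketched in Python (variable names are illustrative):
Click to expand Python code...
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
x, y, z = prog.NewContinuousVariables(3, "v")
# (x+1)(x+y) >= x² + z² + 2z + 5, together with x+1 >= 0 and x+y >= 0.
prog.AddRotatedLorentzConeConstraint(x + 1, x + y, x**2 + z**2 + 2*z + 5)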
- AddScaledDiagonallyDominantDualConeMatrixConstraint(*args, **kwargs)
Overloaded function.
AddScaledDiagonallyDominantDualConeMatrixConstraint(self: pydrake.solvers.MathematicalProgram, X: numpy.ndarray[object[m, n], flags.f_contiguous]) -> list[pydrake.solvers.Binding[RotatedLorentzConeConstraint]]
This is an overloaded variant of add_sdd_dual “scaled diagonally dominant dual cone constraint”
- Parameter
X
: The matrix X. We will use 0.5(X+Xᵀ) as the “symmetric version” of X.
- Precondition:
X(i, j) should be a linear expression of decision variables.
- Returns
A vector of RotatedLorentzConeConstraint constraints of length 1/2 * n * (n-1) encoding VᵢⱼᵀXVᵢⱼ is psd
AddScaledDiagonallyDominantDualConeMatrixConstraint(self: pydrake.solvers.MathematicalProgram, X: numpy.ndarray[object[m, n], flags.f_contiguous]) -> list[pydrake.solvers.Binding[RotatedLorentzConeConstraint]]
This is an overloaded variant of add_sdd_dual “scaled diagonally dominant dual cone constraint”
- Parameter
X
: The matrix X. We will use 0.5(X+Xᵀ) as the “symmetric version” of X.
- Returns
A vector of RotatedLorentzConeConstraint constraints of length 1/2 * n * (n-1) encoding VᵢⱼᵀXVᵢⱼ is psd
- AddScaledDiagonallyDominantMatrixConstraint(*args, **kwargs)
Overloaded function.
AddScaledDiagonallyDominantMatrixConstraint(self: pydrake.solvers.MathematicalProgram, X: numpy.ndarray[object[m, n], flags.f_contiguous]) -> list[list[numpy.ndarray[object[2, 2]]]]
This is an overloaded variant of addsdd “scaled diagonally dominant matrix constraint”
- Parameter
X
: The matrix X to be constrained to be scaled diagonally dominant.
- Precondition:
X(i, j) should be a linear expression of decision variables.
- Returns
M A vector of vectors of 2 x 2 symmetric matrices M. For i < j, M[i][j] is
Click to expand C++ code...
[Mⁱʲ(i, i), Mⁱʲ(i, j)] [Mⁱʲ(i, j), Mⁱʲ(j, j)].
Note that M[i][j](0, 1) = Mⁱʲ(i, j) = (X(i, j) + X(j, i)) / 2. For i >= j, M[i][j] is the zero matrix.
AddScaledDiagonallyDominantMatrixConstraint(self: pydrake.solvers.MathematicalProgram, X: numpy.ndarray[object[m, n], flags.f_contiguous]) -> list[list[numpy.ndarray[object[2, 2]]]]
This is an overloaded variant of addsdd “scaled diagonally dominant matrix constraint”
- Parameter
X
: The symmetric matrix X to be constrained to be scaled diagonally dominant.
- Returns
M For i < j M[i][j] contains the slack variables, mentioned in addsdd “scaled diagonally dominant matrix constraint”. For i >= j, M[i][j] contains default-constructed variables (with get_id() == 0).
- AddSosConstraint(*args, **kwargs)
Overloaded function.
AddSosConstraint(self: pydrake.solvers.MathematicalProgram, p: pydrake.symbolic.Polynomial, monomial_basis: numpy.ndarray[object[m, 1]], type: pydrake.solvers.MathematicalProgram.NonnegativePolynomial = <NonnegativePolynomial.kSos: 1>, gram_name: str = ‘S’) -> numpy.ndarray[object[m, n]]
Adds constraints that a given polynomial
p
is a sum-of-squares (SOS), that is,
p
can be decomposed into
mᵀQm
, where m is the
monomial_basis
. It returns the coefficients matrix Q, which is positive semidefinite.
- Parameter
type
: The type of the polynomial. $*Default:* is kSos, but the user can also use kSdsos and kDsos. Refer to NonnegativePolynomial for details on different types of sos polynomials.
- Parameter
gram_name
: The name of the gram matrix for print out.
Note
It calls
Reparse
to enforce
p
to have this MathematicalProgram’s indeterminates if necessary.
AddSosConstraint(self: pydrake.solvers.MathematicalProgram, p: pydrake.symbolic.Polynomial, type: pydrake.solvers.MathematicalProgram.NonnegativePolynomial = <NonnegativePolynomial.kSos: 1>, gram_name: str = ‘S’) -> tuple[numpy.ndarray[object[m, n]], numpy.ndarray[object[m, 1]]]
Adds constraints that a given polynomial
p
is a sum-of-squares (SOS), that is,
p
can be decomposed into
mᵀQm
, where m is a monomial basis selected from the sparsity of
p
. It returns a pair of constraint bindings expressing:
- The coefficients matrix Q, which is positive semidefinite.
- The monomial basis m.
- Parameter
type
: The type of the polynomial. $*Default:* is kSos, but the user can also use kSdsos and kDsos. Refer to NonnegativePolynomial for the details on different type of sos polynomials.
- Parameter
gram_name
: The name of the gram matrix for print out.
Note
It calls
Reparse
to enforce
p
to have this MathematicalProgram’s indeterminates if necessary.
AddSosConstraint(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression, monomial_basis: numpy.ndarray[object[m, 1]], type: pydrake.solvers.MathematicalProgram.NonnegativePolynomial = <NonnegativePolynomial.kSos: 1>, gram_name: str = ‘S’) -> numpy.ndarray[object[m, n]]
Adds constraints that a given symbolic expression
e
is a sum-of-squares (SOS), that is,
e
can be decomposed into
mᵀQm
, where m is the
monomial_basis
. Note that it decomposes
e
into a polynomial with respect to
indeterminates()
in this mathematical program. It returns the coefficients matrix Q, which is positive semidefinite.
- Parameter
type
: Refer to NonnegativePolynomial class documentation.
- Parameter
gram_name
: The name of the gram matrix for print out.
AddSosConstraint(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression, type: pydrake.solvers.MathematicalProgram.NonnegativePolynomial = <NonnegativePolynomial.kSos: 1>, gram_name: str = ‘S’) -> tuple[numpy.ndarray[object[m, n]], numpy.ndarray[object[m, 1]]]
Adds constraints that a given symbolic expression
e
is a sum-of-squares (SOS), that is,
e
can be decomposed into
mᵀQm
. Note that it decomposes
e
into a polynomial with respect to
indeterminates()
in this mathematical program. It returns a pair expressing:
- The coefficients matrix Q, which is positive semidefinite.
- The monomial basis m.
- Parameter
type
: Refer to NonnegativePolynomial class documentation.
- Parameter
gram_name
: The name of the gram matrix for print out.
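A small Python sketch (illustrative; solving it requires a semidefinite-programming-capable solver such as Mosek, SCS, or Clarabel) certifying that (x² - 1)² is SOS:
Click to expand Python code...
from pydrake.solvers import MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewIndeterminates(1, "x")
# (x² - 1)² = x⁴ - 2x² + 1 is a sum of squares; Q is the Gram matrix, m the basis.
Q, m = prog.AddSosConstraint(x[0]**4 - 2*x[0]**2 + 1)
result = Solve(prog)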
- AddVisualizationCallback(self: pydrake.solvers.MathematicalProgram, arg0: Callable[[numpy.ndarray[numpy.float64[m, 1]]], None], arg1: numpy.ndarray[object[m, 1]]) pydrake.solvers.Binding[VisualizationCallback]
Adds a callback method to visualize intermediate results of the optimization.
Note
Just like other costs/constraints, not all solvers support callbacks. Adding a callback here will force MathematicalProgram::Solve to select a solver that support callbacks. For instance, adding a visualization callback to a quadratic programming problem may result in using a nonlinear programming solver as the default solver.
- Parameter
callback
: a std::function that accepts an Eigen::Vector of doubles representing the bound decision variables.
- Parameter
vars
: the decision variables that should be passed to the callback.
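An illustrative Python sketch (the cost and the print-out are arbitrary); note that, as stated above, adding the callback may force a nonlinear-programming solver to be chosen:
Click to expand Python code...
import numpy as np
from pydrake.solvers import MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddCost((x[0] - 1)**2 + (x[1] + 2)**2)

def show_iterate(x_val):
    # Receives the current values of the bound decision variables x.
    print("x =", x_val)

prog.AddVisualizationCallback(show_iterate, x)
result = Solve(prog)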
- bounding_box_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[BoundingBoxConstraint]]
Getter for all bounding box constraints
- CheckSatisfied(*args, **kwargs)
Overloaded function.
CheckSatisfied(self: pydrake.solvers.MathematicalProgram, binding: pydrake.solvers.Binding[Constraint], prog_var_vals: numpy.ndarray[numpy.float64[m, 1]], tol: float = 1e-06) -> bool
Evaluates CheckSatisfied for the constraint in
binding
using the value of ALL of the decision variables in this program.- Raises
RuntimeError if the size of prog_var_vals is invalid. –
CheckSatisfied(self: pydrake.solvers.MathematicalProgram, bindings: list[pydrake.solvers.Binding[Constraint]], prog_var_vals: numpy.ndarray[numpy.float64[m, 1]], tol: float = 1e-06) -> bool
Evaluates CheckSatisfied for the all of the constraints in
binding
using the value of ALL of the decision variables in this program.- Returns
true iff all of the constraints are satisfied.
- Raises
RuntimeError if the size of prog_var_vals is invalid. –
- CheckSatisfiedAtInitialGuess(*args, **kwargs)
Overloaded function.
CheckSatisfiedAtInitialGuess(self: pydrake.solvers.MathematicalProgram, binding: pydrake.solvers.Binding[Constraint], tol: float = 1e-06) -> bool
Evaluates CheckSatisfied for the constraint in
binding
at the initial guess.
CheckSatisfiedAtInitialGuess(self: pydrake.solvers.MathematicalProgram, bindings: list[pydrake.solvers.Binding[Constraint]], tol: float = 1e-06) -> bool
Evaluates CheckSatisfied for the all of the constraints in
bindings
at the initial guess.- Returns
true iff all of the constraints are satisfied.
- ClearVariableScaling(self: pydrake.solvers.MathematicalProgram) None
Clears the scaling factors for decision variables.
See variable_scaling “Variable scaling” for more information.
- decision_variable(self: pydrake.solvers.MathematicalProgram, i: int) pydrake.symbolic.Variable
Getter for the decision variable with index
i
in the program.
- decision_variable_index(self: pydrake.solvers.MathematicalProgram) dict[int, int]
Returns the mapping from a decision variable ID to its index in the vector containing all the decision variables in the mathematical program.
- decision_variables(self: pydrake.solvers.MathematicalProgram) numpy.ndarray[object[m, 1]]
Getter for all decision variables in the program.
- EvalBinding(*args, **kwargs)
Overloaded function.
EvalBinding(self: pydrake.solvers.MathematicalProgram, binding: pydrake.solvers.Binding[EvaluatorBase], prog_var_vals: numpy.ndarray[numpy.float64[m, 1]]) -> numpy.ndarray[numpy.float64[m, 1]]
Evaluates the value of some binding, for some input value for all decision variables.
- Parameter
binding
: A Binding whose variables are decision variables in this program.
- Parameter
prog_var_vals
: The value of all the decision variables in this program.
- Raises
RuntimeError if the size of prog_var_vals is invalid. –
EvalBinding(self: pydrake.solvers.MathematicalProgram, binding: pydrake.solvers.Binding[EvaluatorBase], prog_var_vals: numpy.ndarray[object[m, 1]]) -> numpy.ndarray[object[m, 1]]
Evaluates the value of some binding, for some input value for all decision variables.
- Parameter
binding
: A Binding whose variables are decision variables in this program.
- Parameter
prog_var_vals
: The value of all the decision variables in this program.
- Raises
RuntimeError if the size of prog_var_vals is invalid. –
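For instance (an illustrative sketch; the cost and values are arbitrary):
Click to expand Python code...
import numpy as np
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
cost = prog.AddQuadraticCost(x[0]**2 + 2*x[1]**2 + 1)
# prog_var_vals must supply a value for every decision variable in the program.
value = prog.EvalBinding(cost, np.array([1.0, 2.0]))   # evaluates to [10.0]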
- EvalBindingAtInitialGuess(self: pydrake.solvers.MathematicalProgram, binding: pydrake.solvers.Binding[EvaluatorBase]) numpy.ndarray[numpy.float64[m, 1]]
Evaluates the evaluator in
binding
at the initial guess.- Returns
The value of
binding
at the initial guess.
- EvalBindings(*args, **kwargs)
Overloaded function.
EvalBindings(self: pydrake.solvers.MathematicalProgram, bindings: list[pydrake.solvers.Binding[EvaluatorBase]], prog_var_vals: numpy.ndarray[numpy.float64[m, 1]]) -> numpy.ndarray[numpy.float64[m, 1]]
Evaluates a set of bindings (plural version of
EvalBinding
).
- Parameter
bindings
: List of bindings.
- Parameter
prog_var_vals
:
The value of all the decision variables in this program.
- Returns
All binding values, concatenated into a single vector.
- Raises
RuntimeError if the size of prog_var_vals is invalid. –
EvalBindings(self: pydrake.solvers.MathematicalProgram, bindings: list[pydrake.solvers.Binding[EvaluatorBase]], prog_var_vals: numpy.ndarray[object[m, 1]]) -> numpy.ndarray[object[m, 1]]
Evaluates a set of bindings (plural version of
EvalBinding
).
- Parameter
bindings
: List of bindings.
- Parameter
prog_var_vals
:
The value of all the decision variables in this program.
- Returns
All binding values, concatenated into a single vector.
- Raises
RuntimeError if the size of prog_var_vals is invalid. –
- EvalBindingVectorized(self: pydrake.solvers.MathematicalProgram, binding: pydrake.solvers.Binding[EvaluatorBase], prog_var_vals: numpy.ndarray[numpy.float64[m, n]]) numpy.ndarray[numpy.float64[m, n]]
A “vectorized” version of EvalBinding. It evaluates the binding for every column of
prog_var_vals
.
- exponential_cone_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[ExponentialConeConstraint]]
Getter for exponential cone constraints.
- FindDecisionVariableIndex(self: pydrake.solvers.MathematicalProgram, var: pydrake.symbolic.Variable) int
Returns the index of the decision variable. Internally the solver thinks all variables are stored in an array, and it accesses each individual variable using its index. This index is used when adding constraints and costs for each solver.
- Precondition:
{
var
is a decision variable in the mathematical program, otherwise this function throws a runtime error.}
- FindDecisionVariableIndices(self: pydrake.solvers.MathematicalProgram, vars: numpy.ndarray[object[m, 1]]) list[int]
Returns the indices of the decision variables. Internally the solver thinks all variables are stored in an array, and it accesses each individual variable using its index. This index is used when adding constraints and costs for each solver.
- Precondition:
{
vars
are decision variables in the mathematical program, otherwise this function throws a runtime error.}
- FindIndeterminateIndex(self: pydrake.solvers.MathematicalProgram, var: pydrake.symbolic.Variable) int
Returns the index of the indeterminate. Internally a solver thinks all indeterminates are stored in an array, and it accesses each individual indeterminate using its index. This index is used when adding constraints and costs for each solver.
- Precondition:
var
is an indeterminate in the mathematical program, otherwise this function throws a runtime error.
- generic_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[Constraint]]
Getter for all generic constraints
- generic_costs(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[Cost]]
Getter for all generic costs.
- GetAllConstraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[Constraint]]
Getter for returning all constraints.
- Returns
Vector of all constraint bindings.
Note
The group ordering may change as more constraint types are added.
- GetAllCosts(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[Cost]]
Getter returning all costs.
- Returns
Vector of all cost bindings.
Note
The group ordering may change as more cost types are added.
- GetBindingVariableValues(self: pydrake.solvers.MathematicalProgram, binding: pydrake.solvers.Binding[EvaluatorBase], prog_var_vals: numpy.ndarray[numpy.float64[m, 1]]) numpy.ndarray[numpy.float64[m, 1]]
Given the value of all decision variables, namely this.decision_variable(i) takes the value prog_var_vals(i), returns the vector that contains the value of the variables in binding.variables().
- Parameter
binding
: binding.variables() must be decision variables in this MathematicalProgram.
- Parameter
prog_var_vals
: The value of ALL the decision variables in this program.
- Returns
binding_variable_vals binding_variable_vals(i) is the value of binding.variables()(i) in prog_var_vals.
- GetInitialGuess(*args, **kwargs)
Overloaded function.
GetInitialGuess(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.symbolic.Variable) -> float
Gets the initial guess for a single variable.
- Precondition:
decision_variable
has been registered in the optimization program.
- Raises
RuntimeError if the pre condition is not satisfied. –
GetInitialGuess(self: pydrake.solvers.MathematicalProgram, arg0: numpy.ndarray[object[m, 1]]) -> numpy.ndarray[numpy.float64[m, 1]]
Gets the initial guess for some variables.
- Precondition:
Each variable in
decision_variable_mat
has been registered in the optimization program.
- Raises
RuntimeError if the pre condition is not satisfied. –
GetInitialGuess(self: pydrake.solvers.MathematicalProgram, arg0: numpy.ndarray[object[m, n]]) -> numpy.ndarray[numpy.float64[m, n]]
Gets the initial guess for some variables.
- Precondition:
Each variable in
decision_variable_mat
has been registered in the optimization program.
- Raises
RuntimeError if the pre condition is not satisfied. –
- GetLinearConstraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[LinearConstraint]]
Getter returning all linear constraints (both linear equality and inequality constraints). Note that this does not include bounding box constraints, which are technically also linear.
- Returns
Vector of all linear constraint bindings.
- GetSolverOptions(*args, **kwargs)
Overloaded function.
GetSolverOptions(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.solvers.SolverId) -> dict
GetSolverOptions(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.solvers.SolverType) -> dict
- GetVariableScaling(self: pydrake.solvers.MathematicalProgram) dict[int, float]
Returns the mapping from a decision variable index to its scaling factor.
See variable_scaling “Variable scaling” for more information.
- indeterminate(self: pydrake.solvers.MathematicalProgram, i: int) pydrake.symbolic.Variable
Getter for the indeterminate with index
i
in the program.
- indeterminates(self: pydrake.solvers.MathematicalProgram) numpy.ndarray[object[m, 1]]
Getter for all indeterminates in the program.
- indeterminates_index(self: pydrake.solvers.MathematicalProgram) dict[int, int]
Returns the mapping from an indeterminate ID to its index in the vector containing all the indeterminates in the mathematical program.
- initial_guess(self: pydrake.solvers.MathematicalProgram) numpy.ndarray[numpy.float64[m, 1]]
Getter for the initial guess
- IsThreadSafe(self: pydrake.solvers.MathematicalProgram) bool
Returns whether it is safe to solve this mathematical program concurrently. A mathematical program is safe to solve concurrently if all of its cost, constraints, and visualization callbacks are marked as thread safe.
- l2norm_costs(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[L2NormCost]]
Getter for l2norm costs.
- linear_complementarity_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[LinearComplementarityConstraint]]
Getter for all linear complementarity constraints.
- linear_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[LinearConstraint]]
Getter for linear inequality constraints. Note that this does not include linear_equality_constraints() nor bounding_box_constraints(). See also GetAllLinearConstraints().
- linear_costs(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[LinearCost]]
Getter for linear costs.
- linear_equality_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[LinearEqualityConstraint]]
Getter for linear equality constraints. Note that this only includes constraints that were added explicitly as LinearEqualityConstraint or which were added symbolically (and their equality constraint nature was uncovered). There may be bounding_box_constraints() and linear_constraints() whose lower bounds also equal their upper bounds.
- linear_matrix_inequality_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[LinearMatrixInequalityConstraint]]
Getter for linear matrix inequality constraints.
- lorentz_cone_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[LorentzConeConstraint]]
Getter for Lorentz cone constraints.
- MakePolynomial(self: pydrake.solvers.MathematicalProgram, e: pydrake.symbolic.Expression) pydrake.symbolic.Polynomial
Creates a symbolic polynomial from the given expression
e
. It uses this MathematicalProgram’s
indeterminates()
in constructing the polynomial.
This method helps a user create a polynomial with the right set of indeterminates which are declared in this MathematicalProgram. We recommend users to use this method over an explicit call to Polynomial constructors to avoid a possible mismatch between this MathematicalProgram’s indeterminates and the user-specified indeterminates (or unspecified, which then includes all symbolic variables in the expression
e
). Consider the following example.
e = ax + bx + c
MP.indeterminates() = {x}
MP.decision_variables() = {a, b}
MP.MakePolynomial(e) creates the polynomial (a + b)x + c. Here only x is an indeterminate of this polynomial.
In contrast, symbolic::Polynomial(e) returns ax + bx + c where all variables {a, b, x} are indeterminates. Note that this is problematic as its indeterminates, {a, b, x}, and the MathematicalProgram’s decision variables, {a, b}, overlap.
Note
This function does not require that the decision variables in
e
are a subset of the decision variables in MathematicalProgram.
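The example above, sketched in Python (names are illustrative):
Click to expand Python code...
from pydrake.solvers import MathematicalProgram
import pydrake.symbolic as sym

prog = MathematicalProgram()
x = prog.NewIndeterminates(1, "x")
a = prog.NewContinuousVariables(2, "a")
e = a[0] * x[0] + a[1] * x[0] + 1
# Only x is an indeterminate: the coefficients are (a(0) + a(1)) and 1.
p = prog.MakePolynomial(e)
# By contrast, sym.Polynomial(e) would treat a(0), a(1) and x(0) all as indeterminates.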
- NewBinaryVariables(*args, **kwargs)
Overloaded function.
NewBinaryVariables(self: pydrake.solvers.MathematicalProgram, rows: int, name: str = ‘b’) -> numpy.ndarray[object[m, 1]]
Adds binary variables to this MathematicalProgram. The new variables are viewed as a column vector, with size
rows
x 1.
See also
NewBinaryVariables(int rows, int cols, const std::vector<std::string>& names);
NewBinaryVariables(self: pydrake.solvers.MathematicalProgram, rows: int, cols: int, name: str = ‘b’) -> numpy.ndarray[object[m, n]]
Adds binary variables, appending them to an internal vector of any existing vars. The initial guess values for the new variables are set to NaN, to indicate that an initial guess has not been assigned. Callers are expected to add costs and/or constraints to have any effect during optimization. Callers can also set the initial guess of the decision variables through SetInitialGuess() or SetInitialGuessForAllVariables().
- Template parameter
Rows
: The number of rows in the new variables.
- Template parameter
Cols
: The number of columns in the new variables.
- Parameter
rows
: The number of rows in the new variables.
- Parameter
cols
: The number of columns in the new variables.
- Parameter
name
: The commonly shared name of the new variables.
- Returns
The MatrixDecisionVariable of size rows x cols, containing the new vars (not all the vars stored).
Example:
Click to expand C++ code...
MathematicalProgram prog; auto b = prog.NewBinaryVariables(2, 3, "b");
This adds a 2 x 3 matrix decision variables into the program.
The name of the variable is only used for the user in order to ease readability.
- NewContinuousVariables(*args, **kwargs)
Overloaded function.
NewContinuousVariables(self: pydrake.solvers.MathematicalProgram, rows: int, name: str = ‘x’) -> numpy.ndarray[object[m, 1]]
Adds continuous variables, appending them to an internal vector of any existing vars. The initial guess values for the new variables are set to NaN, to indicate that an initial guess has not been assigned. Callers are expected to add costs and/or constraints to have any effect during optimization. Callers can also set the initial guess of the decision variables through SetInitialGuess() or SetInitialGuessForAllVariables().
- Parameter
rows
: The number of rows in the new variables.
- Parameter
name
: The name of the newly added variables
- Returns
The VectorDecisionVariable of size rows x 1, containing the new vars (not all the vars stored).
Example:
Click to expand C++ code...
MathematicalProgram prog; auto x = prog.NewContinuousVariables(2, "x");
This adds a 2 x 1 vector containing decision variables into the program. The names of the variables are “x(0)” and “x(1)”.
The name of the variable is only used for the user in order to ease readability.
NewContinuousVariables(self: pydrake.solvers.MathematicalProgram, rows: int, cols: int, name: str = ‘x’) -> numpy.ndarray[object[m, n]]
Adds continuous variables, appending them to an internal vector of any existing vars. The initial guess values for the new variables are set to NaN, to indicate that an initial guess has not been assigned. Callers are expected to add costs and/or constraints to have any effect during optimization. Callers can also set the initial guess of the decision variables through SetInitialGuess() or SetInitialGuessForAllVariables().
- Template parameter
Rows
: The number of rows of the new variables, in the compile time.
- Template parameter
Cols
: The number of columns of the new variables, in the compile time.
- Parameter
rows
: The number of rows in the new variables. When Rows is not Eigen::Dynamic, rows is ignored.
- Parameter
cols
: The number of columns in the new variables. When Cols is not Eigen::Dynamic, cols is ignored.
- Parameter
name
: All variables will share the same name, but different index.
- Returns
The MatrixDecisionVariable of size Rows x Cols, containing the new vars (not all the vars stored).
Example:
Click to expand C++ code...
MathematicalProgram prog; auto x = prog.NewContinuousVariables(2, 3, "X"); auto y = prog.NewContinuousVariables<2, 3>(2, 3, "X");
This adds a 2 x 3 matrix decision variables into the program.
The name of the variable is only used for the user in order to ease readability.
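The Python equivalents of these calls, as a minimal sketch:
Click to expand Python code...
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")      # 2 x 1 vector: x(0), x(1)
X = prog.NewContinuousVariables(2, 3, "X")   # 2 x 3 matrix of decision variables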
- NewEvenDegreeDsosPolynomial(self: pydrake.solvers.MathematicalProgram, indeterminates: pydrake.symbolic.Variables, degree: int) tuple[pydrake.symbolic.Polynomial, numpy.ndarray[object[m, n]], numpy.ndarray[object[m, n]]]
see even_degree_nonnegative_polynomial for details. Variant that produces a DSOS polynomial. Same as NewEvenDegreeSosPolynomial, except the returned polynomial is diagonally dominant sum of squares (dsos).
- NewEvenDegreeFreePolynomial(self: pydrake.solvers.MathematicalProgram, indeterminates: pydrake.symbolic.Variables, degree: int, coeff_name: str = 'a') pydrake.symbolic.Polynomial
Returns a free polynomial that only contains even degree monomials. A monomial is even degree if its total degree (sum of all variables’ degree) is even. For example, xy is an even degree monomial (degree 2) while x²y is not (degree 3).
- Parameter
indeterminates
: The monomial basis is over these indeterminates.
- Parameter
degree
: The highest degree of the polynomial.
- Parameter
coeff_name
: The coefficients of the polynomial are decision variables with this name as a base. The variable name would be “a1”, “a2”, etc.
- NewEvenDegreeNonnegativePolynomial(self: pydrake.solvers.MathematicalProgram, indeterminates: pydrake.symbolic.Variables, degree: int, type: pydrake.solvers.MathematicalProgram.NonnegativePolynomial) tuple[pydrake.symbolic.Polynomial, numpy.ndarray[object[m, n]], numpy.ndarray[object[m, n]]]
See even_degree_nonnegative_polynomial for more details. Variant that produces different non-negative polynomials depending on
type
.- Parameter
type
: The returned polynomial p(x) can be either SOS, SDSOS or DSOS, depending on
type
.
- NewEvenDegreeSdsosPolynomial(self: pydrake.solvers.MathematicalProgram, indeterminates: pydrake.symbolic.Variables, degree: int) tuple[pydrake.symbolic.Polynomial, numpy.ndarray[object[m, n]], numpy.ndarray[object[m, n]]]
see even_degree_nonnegative_polynomial for details. Variant that produces an SDSOS polynomial.
- NewEvenDegreeSosPolynomial(self: pydrake.solvers.MathematicalProgram, indeterminates: pydrake.symbolic.Variables, degree: int) tuple[pydrake.symbolic.Polynomial, numpy.ndarray[object[m, n]], numpy.ndarray[object[m, n]]]
See even_degree_nonnegative_polynomial for more details. Variant that produces a SOS polynomial.
- NewFreePolynomial(self: pydrake.solvers.MathematicalProgram, indeterminates: pydrake.symbolic.Variables, deg: int, coeff_name: str = 'a') pydrake.symbolic.Polynomial
Returns a free polynomial in a monomial basis over
indeterminates
of a given
degree
. It uses
coeff_name
to make new decision variables and use them as coefficients. For example,
NewFreePolynomial({x₀, x₁}, 2)
returns a₀x₁² + a₁x₀x₁ + a₂x₀² + a₃x₁ + a₄x₀ + a₅.
- NewIndeterminates(*args, **kwargs)
Overloaded function.
NewIndeterminates(self: pydrake.solvers.MathematicalProgram, rows: int, name: str = ‘x’) -> numpy.ndarray[object[m, 1]]
Adds indeterminates to this MathematicalProgram, with default name “x”.
See also
NewIndeterminates(int rows, int cols, const std::vector<std::string>& names);
NewIndeterminates(self: pydrake.solvers.MathematicalProgram, rows: int, cols: int, name: str = ‘X’) -> numpy.ndarray[object[m, n]]
Adds indeterminates to this MathematicalProgram, with default name “X”. The new variables are returned and viewed as a matrix, with size
rows
xcols
.See also
NewIndeterminates(int rows, int cols, const std::vector<std::string>& names);
- NewOddDegreeFreePolynomial(self: pydrake.solvers.MathematicalProgram, indeterminates: pydrake.symbolic.Variables, degree: int, coeff_name: str = 'a') pydrake.symbolic.Polynomial
Returns a free polynomial that only contains odd degree monomials. A monomial is odd degree if its total degree (sum of all variables’ degree) is odd. For example, xy is not an odd degree monomial (degree 2) while x²y is (degree 3).
- Parameter
indeterminates
: The monomial basis is over these indeterminates.
- Parameter
degree
: The highest degree of the polynomial.
- Parameter
coeff_name
: The coefficients of the polynomial are decision variables with this name as a base. The variable name would be “a1”, “a2”, etc.
- NewSosPolynomial(*args, **kwargs)
Overloaded function.
NewSosPolynomial(self: pydrake.solvers.MathematicalProgram, monomial_basis: numpy.ndarray[object[m, 1]], type: pydrake.solvers.MathematicalProgram.NonnegativePolynomial = <NonnegativePolynomial.kSos: 1>, gram_name: str = ‘S’) -> tuple[pydrake.symbolic.Polynomial, numpy.ndarray[object[m, n]]]
Returns a pair of a SOS polynomial p = mᵀQm and the Gramian matrix Q, where m is the
monomial
basis. For example,
NewSosPolynomial(Vector2<Monomial>{x,y})
returns a polynomial p = Q₍₀,₀₎x² + 2Q₍₁,₀₎xy + Q₍₁,₁₎y² and Q. Depending on the type of the polynomial, we will impose a different constraint on the polynomial:
- if type = kSos, we impose the polynomial being SOS.
- if type = kSdsos, we impose the polynomial being SDSOS.
- if type = kDsos, we impose the polynomial being DSOS.
- Parameter
gram_name
: The name of the gram matrix for print out.
Note
Q is a symmetric monomial_basis.rows() x monomial_basis.rows() matrix.
NewSosPolynomial(self: pydrake.solvers.MathematicalProgram, gramian: numpy.ndarray[object[m, n], flags.f_contiguous], monomial_basis: numpy.ndarray[object[m, 1]], type: pydrake.solvers.MathematicalProgram.NonnegativePolynomial = <NonnegativePolynomial.kSos: 1>) -> pydrake.symbolic.Polynomial
Overloads NewSosPolynomial, except the Gramian matrix Q is an input instead of an output.
NewSosPolynomial(self: pydrake.solvers.MathematicalProgram, indeterminates: pydrake.symbolic.Variables, degree: int, type: pydrake.solvers.MathematicalProgram.NonnegativePolynomial = <NonnegativePolynomial.kSos: 1>, gram_name: str = ‘S’) -> tuple[pydrake.symbolic.Polynomial, numpy.ndarray[object[m, n]]]
Overloads NewSosPolynomial. Returns a pair of a SOS polynomial p = m(x)ᵀQm(x) of degree
degree
and the Gramian matrix Q that should be PSD, where m(x) is the result of calling
MonomialBasis(indeterminates, degree/2)
. For example,
NewSosPolynomial({x}, 4)
returns a pair of a polynomial p = Q₍₀,₀₎x⁴ + 2Q₍₁,₀₎x³ + (2Q₍₂,₀₎ + Q₍₁,₁₎)x² + 2Q₍₂,₁₎x + Q₍₂,₂₎ and Q.
- Parameter
type
: Depending on the type of the polynomial, we will impose different constraint on the polynomial. - if type = kSos, we impose the polynomial being SOS. - if type = kSdsos, we impose the polynomial being SDSOS. - if type = kDsos, we impose the polynomial being DSOS.
- Parameter
gram_name
: The name of the gram matrix for print out.
- Raises
RuntimeError if degree is not a positive even integer. –
See also
MonomialBasis.
- NewSymmetricContinuousVariables(self: pydrake.solvers.MathematicalProgram, rows: int, name: str = 'Symmetric') numpy.ndarray[object[m, n]]
Adds a runtime sized symmetric matrix as decision variables to this MathematicalProgram. The optimization will only use the stacked columns of the lower triangular part of the symmetric matrix as decision variables.
- Parameter
rows
: The number of rows in the symmetric matrix.
- Parameter
name
: The name of the matrix. It is only used for the user to understand the optimization program. The default name is “Symmetric”, and each variable will be named as
Click to expand C++ code...
Symmetric(0, 0)       Symmetric(1, 0)       ... Symmetric(rows-1, 0)
Symmetric(1, 0)       Symmetric(1, 1)       ... Symmetric(rows-1, 1)
...
Symmetric(rows-1, 0)  Symmetric(rows-1, 1)  ... Symmetric(rows-1, rows-1)
Notice that the (i,j)’th entry and (j,i)’th entry has the same name.
- Returns
The newly added decision variables.
- class NonnegativePolynomial
Types of non-negative polynomial that can be found through conic optimization. We currently support SOS, SDSOS and DSOS. For more information about these polynomial types, please refer to “DSOS and SDSOS Optimization: More Tractable Alternatives to Sum of Squares and Semidefinite Optimization” by Amir Ali Ahmadi and Anirudha Majumdar, with arXiv link https://arxiv.org/abs/1706.02586
Members:
kSos : A sum-of-squares polynomial.
kSdsos : A scaled-diagonally dominant sum-of-squares polynomial.
kDsos : A diagonally dominant sum-of-squares polynomial.
- __init__(self: pydrake.solvers.MathematicalProgram.NonnegativePolynomial, value: int) None
- kDsos = <NonnegativePolynomial.kDsos: 3>
- kSdsos = <NonnegativePolynomial.kSdsos: 2>
- kSos = <NonnegativePolynomial.kSos: 1>
- property name
- property value
- num_indeterminates(self: pydrake.solvers.MathematicalProgram) int
Gets the number of indeterminates in the optimization program
- num_vars(self: pydrake.solvers.MathematicalProgram) int
Getter for number of variables in the optimization program
- positive_semidefinite_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[PositiveSemidefiniteConstraint]]
Getter for positive semidefinite constraints.
- quadratic_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[QuadraticConstraint]]
Getter for quadratic constraints.
- quadratic_costs(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[QuadraticCost]]
Getter for quadratic costs.
- RelaxPsdConstraintToDdDualCone(self: pydrake.solvers.MathematicalProgram, constraint: pydrake.solvers.Binding[PositiveSemidefiniteConstraint]) pydrake.solvers.Binding[LinearConstraint]
1. Relaxes the positive semidefinite
constraint
with a diagonally dominant dual cone constraint. 2. Adds the diagonally dominant dual cone constraint into this MathematicalProgram. 3. Removes the positive semidefinite
constraint
, if it had already been registered in this MathematicalProgram.
This provides a polyhedral (i.e. linear) necessary, but not sufficient, condition for the variables in
constraint
to be positive semidefinite.
- Precondition:
The decision variables contained in constraint have been registered with this MathematicalProgram.
- Returns
The return of AddPositiveDiagonallyDominantDualConeMatrixConstraint applied to the variables in
constraint
.
- RelaxPsdConstraintToSddDualCone(self: pydrake.solvers.MathematicalProgram, constraint: pydrake.solvers.Binding[PositiveSemidefiniteConstraint]) list[pydrake.solvers.Binding[RotatedLorentzConeConstraint]]
1. Relaxes the positive semidefinite
constraint
with a scaled diagonally dominant dual cone constraint. 2. Adds the scaled diagonally dominant dual cone constraint into this MathematicalProgram. 3. Removes the positive semidefinite
constraint
, if it had already been registered in this MathematicalProgram.
This provides a second-order cone necessary, but not sufficient, condition for the variables in
constraint
to be positive semidefinite.
- Precondition:
The decision variables contained in constraint have been registered with this MathematicalProgram.
- Returns
The return of AddScaledDiagonallyDominantDualConeMatrixConstraint applied to the variables in
constraint
.
- RemoveConstraint(self: pydrake.solvers.MathematicalProgram, constraint: pydrake.solvers.Binding[Constraint]) int
Removes
constraint
from this mathematical program. See remove_cost_constraint “Remove costs, constraints or callbacks” for more details.- Returns
number of constraint objects removed from this program. If this program doesn’t contain
constraint
, then returns 0. If this program contains multipleconstraint
objects, then returns the repetition ofconstraint
in this program.
- RemoveCost(self: pydrake.solvers.MathematicalProgram, cost: pydrake.solvers.Binding[Cost]) int
Removes
cost
from this mathematical program. See remove_cost_constraint “Remove costs, constraints or callbacks” for more details.- Returns
number of cost objects removed from this program. If this program doesn’t contain
cost
, then returns 0. If this program contains multiplecost
objects, then returns the repetition ofcost
in this program.
- RemoveDecisionVariable(self: pydrake.solvers.MathematicalProgram, var: pydrake.symbolic.Variable) int
Remove
var
from this program’s decision variables.
Note
after removing the variable, the indices of some remaining variables inside this MathematicalProgram will change.
- Returns
the index of
var
in this optimization program. Returns -1 if
var
is not a decision variable.
- Raises
exception if var is bound with any cost or constraint. –
exception if var is not a decision variable of the program. –
- RemoveVisualizationCallback(self: pydrake.solvers.MathematicalProgram, callback: pydrake.solvers.Binding[VisualizationCallback]) int
Removes
callback
from this mathematical program. See remove_cost_constraint “Remove costs, constraints or callbacks” for more details.- Returns
number of callback objects removed from this program. If this program doesn’t contain
callback
, then returns 0. If this program contains multiplecallback
objects, then returns the repetition ofcallback
in this program.
- Reparse(self: pydrake.solvers.MathematicalProgram, p: pydrake.symbolic.Polynomial) None
Reparses the polynomial
p
using this MathematicalProgram’s indeterminates.
- required_capabilities(self: pydrake.solvers.MathematicalProgram) set[pydrake.solvers.ProgramAttribute]
Getter for the required capability on the solver, given the cost/constraint/variable types in the program.
- rotated_lorentz_cone_constraints(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[RotatedLorentzConeConstraint]]
Getter for rotated Lorentz cone constraints.
- SetDecisionVariableValueInVector(*args, **kwargs)
Overloaded function.
SetDecisionVariableValueInVector(self: pydrake.solvers.MathematicalProgram, decision_variable: pydrake.symbolic.Variable, decision_variable_new_value: float, values: Optional[numpy.ndarray[numpy.float64[m, 1], flags.writeable]]) -> None
Updates the value of a single
decision_variable
inside thevalues
vector to bedecision_variable_new_value
. The other decision variables’ values invalues
are unchanged.- Parameter
decision_variable
: a registered decision variable in this program.
- Parameter
decision_variable_new_value
: the variable’s new values.
- Parameter
values
: The vector to be tweaked; must be of size num_vars().
SetDecisionVariableValueInVector(self: pydrake.solvers.MathematicalProgram, decision_variables: numpy.ndarray[object[m, n], flags.f_contiguous], decision_variables_new_values: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], values: Optional[numpy.ndarray[numpy.float64[m, 1], flags.writeable]]) -> None
Updates the values of some
decision_variables
inside thevalues
vector to bedecision_variables_new_values
. The other decision variables’ values invalues
are unchanged.- Parameter
decision_variables
: registered decision variables in this program.
- Parameter
decision_variables_new_values
: the variables’ respective new values; must have the same rows() and cols() sizes as
decision_variables
.- Parameter
values
: The vector to be tweaked; must be of size num_vars().
- SetInitialGuess(*args, **kwargs)
Overloaded function.
SetInitialGuess(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.symbolic.Variable, arg1: float) -> None
Sets the initial guess for a single variable
decision_variable
. The guess is stored as part of this program.- Precondition:
decision_variable is a registered decision variable in the program.
- Raises
RuntimeError if precondition is not satisfied. –
SetInitialGuess(self: pydrake.solvers.MathematicalProgram, arg0: numpy.ndarray[object[m, n]], arg1: numpy.ndarray[numpy.float64[m, n]]) -> None
Sets the initial guess for the decision variables stored in
decision_variable_mat
to bex0
. The guess is stored as part of this program.
- SetInitialGuessForAllVariables(self: pydrake.solvers.MathematicalProgram, arg0: numpy.ndarray[numpy.float64[m, 1]]) None
Set the initial guess for ALL decision variables. Note that variables begin with a default initial guess of NaN to indicate that no guess is available.
- Parameter
x0
: A vector of appropriate size (num_vars() x 1).
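An illustrative Python sketch of the related initial-guess setters (the values are arbitrary):
Click to expand Python code...
import numpy as np
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.SetInitialGuess(x[0], 1.0)                       # a single variable
prog.SetInitialGuess(x, np.array([1.0, -2.0]))        # a vector of variables
prog.SetInitialGuessForAllVariables(np.array([1.0, -2.0]))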
- SetSolverOption(*args, **kwargs)
Overloaded function.
SetSolverOption(self: pydrake.solvers.MathematicalProgram, solver_id: pydrake.solvers.SolverId, solver_option: str, option_value: float) -> None
See set_solver_option for more details. Set the double-valued options.
SetSolverOption(self: pydrake.solvers.MathematicalProgram, solver_id: pydrake.solvers.SolverId, solver_option: str, option_value: int) -> None
See set_solver_option for more details. Set the integer-valued options.
SetSolverOption(self: pydrake.solvers.MathematicalProgram, solver_id: pydrake.solvers.SolverId, solver_option: str, option_value: str) -> None
See set_solver_option for more details. Set the string-valued options.
SetSolverOption(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.solvers.SolverType, arg1: str, arg2: float) -> None
See set_solver_option for more details. Set the double-valued options.
SetSolverOption(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.solvers.SolverType, arg1: str, arg2: int) -> None
See set_solver_option for more details. Set the integer-valued options.
SetSolverOption(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.solvers.SolverType, arg1: str, arg2: str) -> None
See set_solver_option for more details. Set the string-valued options.
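For example (an illustrative sketch; option names are solver specific, here an integer-valued Ipopt option):
Click to expand Python code...
from pydrake.solvers import IpoptSolver, MathematicalProgram

prog = MathematicalProgram()
# "max_iter" is an Ipopt-specific option; consult each solver's manual for names.
prog.SetSolverOption(IpoptSolver.id(), "max_iter", 200)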
- SetSolverOptions(self: pydrake.solvers.MathematicalProgram, arg0: pydrake.solvers.SolverOptions) None
Overwrite the stored solver options inside MathematicalProgram with the provided solver options.
- SetVariableScaling(self: pydrake.solvers.MathematicalProgram, var: pydrake.symbolic.Variable, s: float) None
Setter for the scaling
s
of decision variablevar
.- Parameter
var
: the decision variable to be scaled.
- Parameter
s
: scaling factor (must be positive).
See variable_scaling “Variable scaling” for more information.
- TightenPsdConstraintToDd(self: pydrake.solvers.MathematicalProgram, constraint: pydrake.solvers.Binding[PositiveSemidefiniteConstraint]) numpy.ndarray[object[m, n]]
1. Tightens the positive semidefinite
constraint
with a positive diagonally dominant constraint. 2. Adds the positive diagonally dominant constraint into this MathematicalProgram. 3. Removes the positive semidefinite
constraint
, if it had already been registered in this MathematicalProgram.
This provides a polyhedral (i.e. linear) sufficient, but not necessary, condition for the variables in
constraint
to be positive semidefinite.
- Precondition:
The decision variables contained in constraint have been registered with this MathematicalProgram.
- Returns
The return of AddPositiveDiagonallyDominantMatrixConstraint applied to the variables in
constraint
.
- TightenPsdConstraintToSdd(self: pydrake.solvers.MathematicalProgram, constraint: pydrake.solvers.Binding[PositiveSemidefiniteConstraint]) list[list[numpy.ndarray[object[2, 2]]]]
1. Tightens the positive semidefinite
constraint
with a scaled diagonally dominant constraint. 2. Adds the scaled diagonally dominant constraint into this MathematicalProgram. 3. Removes the positive semidefinite
constraint
, if it had already been registered in this MathematicalProgram.
This provides a second-order cone sufficient, but not necessary, condition for the variables in
constraint
to be positive semidefinite.
- Precondition:
The decision variables contained in constraint have been registered with this MathematicalProgram.
- Returns
The return of AddScaledDiagonallyDominantMatrixConstraint applied to the variables in
constraint
.
- ToLatex(self: pydrake.solvers.MathematicalProgram, precision: int = 3) str
Returns a string representation of this program in LaTeX.
This can be particularly useful e.g. in a Jupyter (python) notebook:
Click to expand C++ code...
from IPython.display import Markdown, display display(Markdown(prog.ToLatex()))
Note that by default, we do not require variables to have unique names. Providing useful variable names and calling Evaluator::set_description() to describe the costs and constraints can dramatically improve the readability of the output. See the tutorial
debug_mathematical_program.ipynb
for more information.
- visualization_callbacks(self: pydrake.solvers.MathematicalProgram) list[pydrake.solvers.Binding[VisualizationCallback]]
Getter for all callbacks.
- class pydrake.solvers.MathematicalProgramResult
The result returned by MathematicalProgram::Solve(). It stores the solvers::SolutionResult (whether the program is solved to optimality, detected infeasibility, etc), the optimal value for the decision variables, the optimal cost, and solver specific details.
- __init__(self: pydrake.solvers.MathematicalProgramResult) None
Constructs the result.
Note
The solver_details is set to nullptr.
- EvalBinding(self: pydrake.solvers.MathematicalProgramResult, arg0: pydrake.solvers.Binding[EvaluatorBase]) numpy.ndarray[numpy.float64[m, 1]]
Evaluate a Binding at the solution.
- Parameter
binding
: A binding between a constraint/cost and the variables.
- Precondition:
The binding.variables() must be within the decision variables in the MathematicalProgram that generated this MathematicalProgramResult.
- Precondition:
The user must have called set_decision_variable_index() function.
- Parameter
- get_optimal_cost(self: pydrake.solvers.MathematicalProgramResult) float
Gets the optimal cost.
- get_solution_result(self: pydrake.solvers.MathematicalProgramResult) pydrake.solvers.SolutionResult
Gets SolutionResult.
- get_solver_details(self: pydrake.solvers.MathematicalProgramResult) object
Gets the solver details for the
Solver
that solved the program. Throws an error if the solver_details has not been set.
- get_solver_id(self: pydrake.solvers.MathematicalProgramResult) pydrake.solvers.SolverId
Gets the solver ID.
- get_suboptimal_objective(self: pydrake.solvers.MathematicalProgramResult, solution_number: int) float
Gets the suboptimal objective value. See solution_pools “solution pools”.
- Parameter
solution_number
: The index of the sub-optimal solution.
- Precondition:
solution_number
should be in the range [0, num_suboptimal_solution()).
- Parameter
- get_x_val(self: pydrake.solvers.MathematicalProgramResult) numpy.ndarray[numpy.float64[m, 1]]
Gets the decision variable values.
- GetDualSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: pydrake.solvers.Binding[EvaluatorBase]) numpy.ndarray[numpy.float64[m, 1]]
Gets the dual solution associated with a constraint.
For constraints in the form lower <= f(x) <= upper (including linear inequality, linear equality, bounding box constraints, and general nonlinear constraints), we interpret the dual variable value as the “shadow price” of the original problem. Namely if we change the constraint bound by one unit (each unit is infinitesimally small), the change of the optimal cost is the value of the dual solution times the unit. Mathematically dual_solution = ∂optimal_cost / ∂bound.
For a linear equality constraint Ax = b where b ∈ ℝⁿ, the vector of dual variables has n rows, and dual_solution(i) is the value of the dual variable for the constraint A(i,:)*x = b(i).
For a linear inequality constraint lower <= A*x <= upper where lower and upper ∈ ℝⁿ, dual_solution also has n rows. dual_solution(i) is the value of the dual variable for constraint lower(i) <= A(i,:)*x <= upper(i). If neither side of the constraint is active, then dual_solution(i) is 0. If the left-hand side lower(i) <= A(i, :)*x is active (meaning lower(i) = A(i, :)*x at the solution), then dual_solution(i) is non-negative (because the objective is to minimize a cost, increasing the lower bound means the constraint set is tighter, hence the optimal cost cannot decrease; thus the shadow price is non-negative). If the right-hand side A(i, :)*x <= upper(i) is active (meaning A(i,:)*x = upper(i) at the solution), then dual_solution(i) is non-positive.
For a bounding box constraint lower <= x <= upper, the interpretation of the dual solution is the same as the linear inequality constraint.
For a Lorentz cone or rotated Lorentz cone constraint that Ax + b is in the cone, depending on the solver, the dual solution has different meanings: 1. If the solver is Gurobi, then the user can only obtain the dual solution by explicitly setting the options for computing dual solution.
Click to expand C++ code...
auto constraint = prog.AddLorentzConeConstraint(...);
GurobiSolver solver;
// Explicitly tell the solver to compute the dual solution for Lorentz
// cone or rotated Lorentz cone constraint, check
// https://www.gurobi.com/documentation/10.1/refman/qcpdual.html for
// more information.
SolverOptions options;
options.SetOption(GurobiSolver::id(), "QCPDual", 1);
MathematicalProgramResult result = solver.Solve(prog, {}, options);
Eigen::VectorXd dual_solution = result.GetDualSolution(constraint);
The dual solution has size 1, dual_solution(0) is the shadow price for the constraint z₁² + … +zₙ² ≤ z₀² for Lorentz cone constraint, and the shadow price for the constraint z₂² + … +zₙ² ≤ z₀z₁ for rotated Lorentz cone constraint, where z is the slack variable representing z = A*x+b and z in the Lorentz cone/rotated Lorentz cone. 2. For nonlinear solvers like IPOPT, the dual solution for Lorentz cone constraint (with EvalType::kConvex) is the shadow price for z₀ - sqrt(z₁² + … +zₙ²) ≥ 0, where z = Ax+b. 3. For other convex conic solver such as SCS, MOSEK™, CSDP, etc, the dual solution to the (rotated) Lorentz cone constraint doesn’t have the “shadow price” interpretation, but should lie in the dual cone, and satisfy the KKT condition. For more information, refer to https://docs.mosek.com/10.1/capi/prob-def-conic.html#duality-for-conic-optimization as an explanation.
The interpretation for the dual variable to conic constraint x ∈ K can be different. Here K is a convex cone, including exponential cone, power cone, psd cone, etc. When the problem is solved by a convex solver (like SCS, MOSEK™, CSDP, etc), often it has a dual variable z ∈ K*, where K* is the dual cone. Here the dual variable DOESN’T have the interpretation of “shadow price”, but should satisfy the KKT condition, while the dual variable stays inside the dual cone.
When K is a psd cone, the returned dual solution is the lower triangle of the dual symmetric psd matrix. Namely for the primal problem
min trace(C*X) s.t A(X) = b X is psd
the dual is
max b’*y s.t A’(y) - C = Z Z is psd.
We return the lower triangular part of Z. You can call drake::math::ToSymmetricMatrixFromLowerTriangularColumns to get the matrix Z.
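A minimal Python sketch of the shadow-price interpretation for a linear inequality constraint on a toy LP (the printed value of roughly -1 is what the sign convention above suggests; the particular solver Drake chooses may differ):
Click to expand Python code...
import numpy as np
from pydrake.solvers import MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddLinearCost(-x[0] - x[1])
c = prog.AddLinearConstraint(x[0] + x[1] <= 1)
prog.AddBoundingBoxConstraint(0, np.inf, x)
result = Solve(prog)
# The upper bound is active at the optimum, so per the convention above the
# dual value is non-positive; relaxing the bound by one unit lowers the
# optimal cost by one unit, so we expect a value of about -1.
print(result.GetDualSolution(c))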
- GetInfeasibleConstraintNames(self: pydrake.solvers.MathematicalProgramResult, prog: pydrake.solvers.MathematicalProgram, tol: Optional[float] = None) list[str]
See get_infeasible_constraints for more information.
- Parameter
prog
: The MathematicalProgram that was solved to obtain
this
MathematicalProgramResult.- Parameter
tolerance
: A positive tolerance to check the constraint violation. If no tolerance is provided, this method will attempt to obtain the constraint tolerance from the solver, or insert a conservative default tolerance.
Note: Currently most constraints have the empty string as the description, so the NiceTypeName of the Constraint is used instead. Use e.g.
prog.AddConstraint(x == 1).evaluator().set_description(str)
to make this method more specific/useful.- Parameter
- GetInfeasibleConstraints(self: pydrake.solvers.MathematicalProgramResult, prog: pydrake.solvers.MathematicalProgram, tol: Optional[float] = None) list[pydrake.solvers.Binding[Constraint]]
See get_infeasible_constraints for more information.
- Parameter
prog
: The MathematicalProgram that was solved to obtain
this
MathematicalProgramResult.- Parameter
tolerance
: A positive tolerance to check the constraint violation. If no tolerance is provided, this method will attempt to obtain the constraint tolerance from the solver, or insert a conservative default tolerance.
- Returns
infeasible_bindings A vector of all infeasible bindings (constraints together with the associated variables) at the best-effort solution.
- Parameter
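A small sketch of how constraint descriptions make these infeasibility reports readable, assuming an obviously infeasible toy program:
Click to expand Python code...
from pydrake.solvers import MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, "x")
c1 = prog.AddLinearConstraint(x[0] >= 2)
c1.evaluator().set_description("x lower bound")
c2 = prog.AddLinearConstraint(x[0] <= 1)
c2.evaluator().set_description("x upper bound")
result = Solve(prog)
if not result.is_success():
    # Reports the constraints violated at the returned best-effort solution,
    # using the descriptions set above.
    print(result.GetInfeasibleConstraintNames(prog))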
- GetSolution(*args, **kwargs)
Overloaded function.
GetSolution(self: pydrake.solvers.MathematicalProgramResult) -> numpy.ndarray[numpy.float64[m, 1]]
Gets the solution of all decision variables.
GetSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: pydrake.symbolic.Variable) -> float
Gets the solution of a single decision variable.
- Parameter
var
: The decision variable.
- Returns
The value of the decision variable after solving the problem.
- Raises
RuntimeError if var is not captured in the mapping decision_variable_index, as the input argument of set_decision_variable_index(). –
GetSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: numpy.ndarray[object[m, 1]]) -> numpy.ndarray[numpy.float64[m, 1]]
Gets the solution of an Eigen matrix of decision variables.
- Template parameter
Derived
: An Eigen matrix containing Variable.
- Parameter
var
: The decision variables.
- Returns
The value of the decision variable after solving the problem.
GetSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: numpy.ndarray[object[m, n]]) -> numpy.ndarray[numpy.float64[m, n]]
Gets the solution of an Eigen matrix of decision variables.
- Template parameter
Derived
: An Eigen matrix containing Variable.
- Parameter
var
: The decision variables.
- Returns
The value of the decision variable after solving the problem.
GetSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: pydrake.symbolic.Expression) -> pydrake.symbolic.Expression
Substitutes the value of all decision variables into the Expression.
- Parameter
e
: The symbolic expression into which the solution values are substituted.
- Returns
the Expression that is the result of the substitution.
GetSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: pydrake.symbolic.Polynomial) -> pydrake.symbolic.Polynomial
Substitutes the value of all decision variables into the coefficients of the symbolic polynomial.
- Parameter
p
: A symbolic polynomial. Its indeterminates can’t intersect with the set of decision variables of the MathematicalProgram from which this result is obtained.
- Returns
the symbolic::Polynomial as the result of the substitution.
GetSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: numpy.ndarray[object[m, n]]) -> numpy.ndarray[object[m, n]]
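A brief sketch of the overloads above on a toy program (any variable, array of variables, or symbolic expression built from the program's variables can be queried):
Click to expand Python code...
from pydrake.solvers import MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddLinearCost(x[0] + x[1])
prog.AddBoundingBoxConstraint(1, 2, x)
result = Solve(prog)
result.GetSolution()                 # all decision variable values as a vector
result.GetSolution(x[0])             # a single variable -> float
result.GetSolution(x)                # an array of variables -> numpy array
result.GetSolution(x[0] + 2 * x[1])  # an Expression with values substituted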
- GetSuboptimalSolution(*args, **kwargs)
Overloaded function.
GetSuboptimalSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: pydrake.symbolic.Variable, arg1: int) -> float
Gets the suboptimal solution of a decision variable. See solution_pools “solution pools”
- Parameter
var
: The decision variable.
- Parameter
solution_number
: The index of the sub-optimal solution.
- Precondition:
solution_number
should be in the range [0, num_suboptimal_solution()).
- Returns
The suboptimal value of the decision variable after solving the problem.
GetSuboptimalSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: numpy.ndarray[object[m, 1]], arg1: int) -> numpy.ndarray[numpy.float64[m, 1]]
Some solvers (like Gurobi, Cplex, etc.) can store a pool of (suboptimal) solutions for a mixed-integer programming model. Gets the suboptimal solution corresponding to a matrix of decision variables. See solution_pools “solution pools”
- Parameter
var
: The decision variables.
- Parameter
solution_number
: The index of the sub-optimal solution.
- Precondition:
solution_number
should be in the range [0, num_suboptimal_solution()).
- Returns
The suboptimal values of the decision variables after solving the problem.
GetSuboptimalSolution(self: pydrake.solvers.MathematicalProgramResult, arg0: numpy.ndarray[object[m, n]], arg1: int) -> numpy.ndarray[numpy.float64[m, n]]
Some solvers (like Gurobi, Cplex, etc.) can store a pool of (suboptimal) solutions for a mixed-integer programming model. Gets the suboptimal solution corresponding to a matrix of decision variables. See solution_pools “solution pools”
- Parameter
var
: The decision variables.
- Parameter
solution_number
: The index of the sub-optimal solution.
- Precondition:
solution_number
should be in the range [0, num_suboptimal_solution()).
- Returns
The suboptimal values of the decision variables after solving the problem.
- is_success(self: pydrake.solvers.MathematicalProgramResult) bool
Returns true if the optimization problem is solved successfully; false otherwise. For more information on the solution status, the user could call get_solver_details() to obtain the solver-specific solution status.
- num_suboptimal_solution(self: pydrake.solvers.MathematicalProgramResult) int
Number of suboptimal solutions stored inside MathematicalProgramResult. See solution_pools “solution pools”.
- set_solution_result(self: pydrake.solvers.MathematicalProgramResult, arg0: pydrake.solvers.SolutionResult) None
Sets SolutionResult.
- set_x_val(self: pydrake.solvers.MathematicalProgramResult, x_val: numpy.ndarray[numpy.float64[m, 1]]) None
Sets the decision variable values.
- SetSolution(self: pydrake.solvers.MathematicalProgramResult, var: pydrake.symbolic.Variable, value: float) None
Resets the solution of a single decision variable that is already registered with this result.
- Raises
RuntimeError if var is not captured in the mapping decision_variable_index, as the input argument of set_decision_variable_index(). –
- class pydrake.solvers.MinimumValueLowerBoundConstraint
Bases:
pydrake.solvers.Constraint
Constrain min(v) >= lb where v=f(x). Namely, all elements of the vector
v
returned by the user-provided function f(x) must be no smaller than a specified value lb
.The formulation of the constraint is
Click to expand C++ code...
SmoothOverMax( φ((vᵢ - v_influence)/(v_influence - lb)) / φ(-1) ) ≤ 1
where vᵢ is the i-th value returned by the user-provided function,
lb
is the minimum allowable value. v_influence is the “influence value” (the value below which an element influences the constraint or, conversely, the value above which an element is ignored), φ is a solvers::MinimumValuePenaltyFunction, and SmoothOverMax(v) is a smooth, over approximation of max(v) (i.e. SmoothOverMax(v) >= max(v), for all v). We require that lb < v_influence. The input scaling (vᵢ - v_influence)/(v_influence - lb) ensures that at the boundary of the feasible set (when vᵢ == lb), we evaluate the penalty function at -1, where it is required to have a non-zero gradient. The user-provided function may return a vector with up tomax_num_values
elements. If it returns a vector with fewer thanmax_num_values
elements, the remaining elements are assumed to be greater than the “influence value”.- __init__(self: pydrake.solvers.MinimumValueLowerBoundConstraint, num_vars: int, minimum_value_lower: float, influence_value_offset: float, max_num_values: int, value_function: Callable[[numpy.ndarray[object[m, 1]], float], numpy.ndarray[object[m, 1]]], value_function_double: Callable[[numpy.ndarray[numpy.float64[m, 1]], float], numpy.ndarray[numpy.float64[m, 1]]] = None) None
Constructs a MinimumValueLowerBoundConstraint that enforces min(v) >= lb (the upper bound ub in the companion condition min(v) <= ub is set to infinity).
- Parameter
num_vars
: The number of inputs to
value_function
- Parameter
minimum_value
: The minimum allowed value, lb, for all elements of the vector returned by
value_function
.- Parameter
influence_value_offset
: The difference between the influence value, v_influence, and the minimum value, lb (see class documentation). This value must be finite and strictly positive, as it is used to scale the values returned by
value_function
. Smaller values may decrease the amount of computation required for each constraint evaluation ifvalue_function
can quickly determine that some elements will be larger than the influence value and skip the computation associated with those elements.- Parameter
max_num_values
: The maximum number of elements in the vector returned by
value_function
.- Parameter
value_function
: User-provided function that takes a
num_vars
-element vector and the influence distance as inputs and returns a vector with up tomax_num_values
elements. The function can omit from the return vector any elements larger than the provided influence distance.- Parameter
value_function_double
: Optional user-provided function that computes the same values as
value_function
but for double rather than AutoDiffXd. If omitted,value_function
will be called (and the gradients discarded) when this constraint is evaluated for doubles.- Precondition:
value_function_double(ExtractValue(x), v_influence) == ExtractValue(value_function(x, v_influence))
for all x.- Precondition:
value_function(x).size() <= max_num_values
for all x.
- Raises
RuntimeError if influence_value_offset = ∞. –
RuntimeError if influence_value_offset ≤ 0. –
- Parameter
- influence_value(self: pydrake.solvers.MinimumValueLowerBoundConstraint) float
Getter for the influence value.
- minimum_value_lower(self: pydrake.solvers.MinimumValueLowerBoundConstraint) float
Getter for the lower bound on the minimum value.
- set_penalty_function(self: pydrake.solvers.MinimumValueLowerBoundConstraint, new_penalty_function: Callable[[float, bool], tuple]) None
Setter for the penalty function. The penalty function new_penalty_function(x: float, compute_grad: bool) -> tuple[float, Optional[float]] returns [penalty_value, penalty_gradient] when compute_grad=True, or [penalty_value, None] when compute_grad=False. See minimum_value_constraint.h on the requirement on MinimumValuePenaltyFunction.
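A hedged sketch of constructing and using this constraint; the value function here (two hypothetical "clearance" values derived from a scalar x) is purely illustrative, and the generic AddConstraint(constraint, vars) overload of MathematicalProgram is assumed:
Click to expand Python code...
import numpy as np
from pydrake.solvers import (MathematicalProgram,
                             MinimumValueLowerBoundConstraint, Solve)

def value_fn(x, v_influence):
    # Hypothetical "values": distances of x[0] from -1 and from +1. Written
    # with plain arithmetic so it works for both float and AutoDiffXd inputs.
    return np.array([x[0] + 1.0, 1.0 - x[0]])

constraint = MinimumValueLowerBoundConstraint(
    num_vars=1, minimum_value_lower=0.5, influence_value_offset=1.0,
    max_num_values=2, value_function=value_fn)

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, "x")
prog.AddConstraint(constraint, x)
result = Solve(prog, np.array([0.2]))
print(result.GetSolution(x))  # any x with both values >= 0.5 is feasible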
- class pydrake.solvers.MinimumValueUpperBoundConstraint
Bases:
pydrake.solvers.Constraint
Constrain min(v) <= ub where v=f(x). Namely, at least one element of the vector
v
returned by the user-provided function f(x) must be no larger than a specified value ub
.The formulation of the constraint is
Click to expand C++ code...
SmoothUnderMax( φ((vᵢ - v_influence)/(v_influence - ub)) / φ(-1) ) ≥ 1
where vᵢ is the i-th value returned by the user-provided function,
ub
is the upper bound for the min(v). (Note thatub
is NOT the upper bound ofv
). v_influence is the “influence value” (the value below which an element influences the constraint or, conversely, the value above which an element is ignored), φ is a solvers::MinimumValuePenaltyFunction. SmoothUnderMax(x) is a smooth, under approximation of max(v) (i.e. SmoothUnderMax(v) <= max(v) for all v). We require that ub < v_influence. The input scaling (vᵢ - v_influence)/(v_influence - ub) ensures that at the boundary of the feasible set (when vᵢ == ub), we evaluate the penalty function at -1, where it is required to have a non-zero gradient. The user-provided function may return a vector with up tomax_num_values
elements. If it returns a vector with fewer thanmax_num_values
elements, the remaining elements are assumed to be greater than the “influence value”.- __init__(self: pydrake.solvers.MinimumValueUpperBoundConstraint, num_vars: int, minimum_value_upper: float, influence_value_offset: float, max_num_values: int, value_function: Callable[[numpy.ndarray[object[m, 1]], float], numpy.ndarray[object[m, 1]]], value_function_double: Callable[[numpy.ndarray[numpy.float64[m, 1]], float], numpy.ndarray[numpy.float64[m, 1]]] = None) None
Constructs a MinimumValueUpperBoundConstraint. min(v) <= ub
- Parameter
num_vars
: The number of inputs to
value_function
- Parameter
minimum_value_upper
: The upper bound on the minimum allowed value for all elements of the vector returned by
value_function
, namely min(value_function(x)) <= minimum_value_upper- Parameter
influence_value_offset
: The difference between the influence value, v_influence, and minimum_value_upper. This value must be finite and strictly positive, as it is used to scale the values returned by
value_function
. Larger values may increase the possibility of finding a solution to the constraint. With a small v_influence, the value_function will ignore the entries with value larger than v_influence. While it is possible that, by changing x, a value that is currently ignored could decrease to below ub, with a small v_influence the gradient of that entry is never considered while the entry is ignored. We strongly suggest using a larger v_influence
compared to the one used in MinimumValueConstraint when constraining min(v) >= lb.- Parameter
max_num_values
: The maximum number of elements in the vector returned by
value_function
.- Parameter
value_function
: User-provided function that takes a
num_vars
-element vector and the influence distance as inputs and returns a vector with up tomax_num_values
elements. The function can omit from the return vector any elements larger than the provided influence distance.- Parameter
value_function_double
: Optional user-provided function that computes the same values as
value_function
but for double rather than AutoDiffXd. If omitted,value_function
will be called (and the gradients discarded) when this constraint is evaluated for doubles.- Precondition:
value_function_double(ExtractValue(x), v_influence) == ExtractValue(value_function(x, v_influence))
for all x.- Precondition:
value_function(x).size() <= max_num_values
for all x.
- Raises
RuntimeError if influence_value_offset = ∞. –
RuntimeError if influence_value_offset ≤ 0. –
- Parameter
- influence_value(self: pydrake.solvers.MinimumValueUpperBoundConstraint) float
Getter for the influence value.
- minimum_value_upper(self: pydrake.solvers.MinimumValueUpperBoundConstraint) float
Getter for the upper bound on the minimum value.
- set_penalty_function(self: pydrake.solvers.MinimumValueUpperBoundConstraint, new_penalty_function: Callable[[float, bool], tuple]) None
Setter for the penalty function. The penalty function new_penalty_function(x: float, compute_grad: bool) -> tuple[float, Optional[float]] returns [penalty_value, penalty_gradient] when compute_grad=True, or [penalty_value, None] when compute_grad=False. See minimum_value_constraint.h on the requirement on MinimumValuePenaltyFunction.
- class pydrake.solvers.MixedIntegerBranchAndBound
Given a mixed-integer optimization problem (MIP) (or more accurately, a mixed binary problem), solve this problem through a branch-and-bound process. We will first replace all the binary variables with continuous variables, and relax the integral constraint on the binary variables z ∈ {0, 1} with continuous constraints 0 ≤ z ≤ 1. In the subsequent steps, at each node of the tree, we will fix some binary variables to either 0 or 1, and solve for the remaining variables. Notice that we will create a new set of variables in the branch-and-bound process, since we need to replace the binary variables with continuous variables.
- __init__(self: pydrake.solvers.MixedIntegerBranchAndBound, prog: pydrake.solvers.MathematicalProgram, solver_id: pydrake.solvers.SolverId, options: pydrake.solvers.MixedIntegerBranchAndBound.Options = Options(max_explored_nodes=- 1)) None
Construct a branch-and-bound tree from a mixed-integer optimization program.
- Parameter
prog
: A mixed-integer optimization program.
- Parameter
solver_id
: The ID of the solver for the optimization.
- Parameter
- GetOptimalCost(self: pydrake.solvers.MixedIntegerBranchAndBound) float
Get the optimal cost.
- GetSolution(*args, **kwargs)
Overloaded function.
GetSolution(self: pydrake.solvers.MixedIntegerBranchAndBound, mip_var: pydrake.symbolic.Variable, nth_best_solution: int = 0) -> float
Get the n’th best integral solution for a variable. The best solutions are sorted in the ascending order based on their costs. Each solution is found in a separate node in the branch-and-bound tree, so the values of the binary variables are different in each solution.
- Parameter
mip_var
: A variable in the original MIP.
- Parameter
nth_best_solution
: . The index of the best integral solution.
- Precondition:
mip_var
is a variable in the original MIP.- Precondition:
nth_best_solution
is between 0 and solutions().size().
- Raises
RuntimeError if the preconditions are not satisfied. –
GetSolution(self: pydrake.solvers.MixedIntegerBranchAndBound, mip_vars: numpy.ndarray[object[m, 1]], nth_best_solution: int = 0) -> numpy.ndarray[numpy.float64[m, 1]]
Get the n’th best integral solution for some variables. The best solutions are sorted in the ascending order based on their costs. Each solution is found in a separate node in the branch-and-bound tree, so the values of the binary variables are different in each solution.
- Parameter
mip_vars
: Variables in the original MIP.
- Parameter
nth_best_solution
: . The index of the best integral solution.
- Precondition:
mip_vars
are variables in the original MIP.- Precondition:
nth_best_solution
is between 0 and solutions().size().
- Raises
RuntimeError if the preconditions are not satisfied. –
GetSolution(self: pydrake.solvers.MixedIntegerBranchAndBound, mip_vars: numpy.ndarray[object[m, n]], nth_best_solution: int = 0) -> numpy.ndarray[numpy.float64[m, n]]
Get the n’th best integral solution for some variables. The best solutions are sorted in the ascending order based on their costs. Each solution is found in a separate node in the branch-and-bound tree, so the values of the binary variables are different in each solution.
- Parameter
mip_vars
: Variables in the original MIP.
- Parameter
nth_best_solution
: . The index of the best integral solution.
- Precondition:
mip_vars
are variables in the original MIP.- Precondition:
nth_best_solution
is between 0 and solutions().size().
- Raises
RuntimeError if the preconditions are not satisfied. –
- GetSubOptimalCost(self: pydrake.solvers.MixedIntegerBranchAndBound, nth_suboptimal_cost: int) float
Get the n’th sub-optimal cost. The costs are sorted in the ascending order. The sub-optimal costs do not include the optimal cost.
- Parameter
nth_suboptimal_cost
: The n’th sub-optimal cost.
- Precondition:
nth_suboptimal_cost
is between 0 and solutions().size() - 1.
- Raises
RuntimeError if the precondition is not satisfied. –
- Parameter
- class Options
Configuration settings for the MixedIntegerBranchAndBound constructor.
- __init__(self: pydrake.solvers.MixedIntegerBranchAndBound.Options, **kwargs) None
- property max_explored_nodes
The maximal number of explored nodes in the tree. The branch and bound process will terminate if the tree has explored this number of nodes. max_explored_nodes <= 0 means that we don’t put an upper bound on the number of explored nodes.
- Solve(self: pydrake.solvers.MixedIntegerBranchAndBound) pydrake.solvers.SolutionResult
Solve the mixed-integer problem (MIP) through a branch and bound process.
- Returns
solution_result
: If solution_result=SolutionResult::kSolutionFound, then the best solutions are stored inside solutions(). The user can access the value of each variable(s) through GetSolution(…). If solution_result=SolutionResult::kInfeasibleConstraints, then the mixed-integer problem is primal infeasible. If solution_result=SolutionResult::kUnbounded, then the mixed-integer problem is primal unbounded.
- Returns
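As a rough sketch of the workflow (assuming OSQP, or any other QP-capable solver id, for the relaxed problem at each node):
Click to expand Python code...
from pydrake.solvers import (MathematicalProgram, MixedIntegerBranchAndBound,
                             OsqpSolver, SolutionResult)

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, "x")
b = prog.NewBinaryVariables(1, "b")
prog.AddLinearConstraint(x[0] + b[0] >= 1.5)
prog.AddQuadraticCost(x[0] ** 2 + b[0])
bnb = MixedIntegerBranchAndBound(prog, OsqpSolver.id())
result = bnb.Solve()
if result == SolutionResult.kSolutionFound:
    print(bnb.GetOptimalCost())
    print(bnb.GetSolution(x[0]), bnb.GetSolution(b[0]))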
- class pydrake.solvers.MixedIntegerRotationConstraintGenerator
We relax the non-convex SO(3) constraint on rotation matrix R to mixed-integer linear constraints. The formulation of these constraints is described in Global Inverse Kinematics via Mixed-integer Convex Optimization by Hongkai Dai, Gregory Izatt and Russ Tedrake, ISRR, 2017.
The SO(3) constraint on a rotation matrix R = [r₁, r₂, r₃], rᵢ∈ℝ³ is
Click to expand C++ code...
rᵢᵀrᵢ = 1 (1) rᵢᵀrⱼ = 0 (2) r₁ x r₂ = r₃ (3)
To relax SO(3) constraint on rotation matrix R, we divide the range [-1, 1] (the range of each entry in R) into smaller intervals [φ(i), φ(i+1)], and then relax the SO(3) constraint within each interval. We provide 3 approaches for relaxation 1. By replacing each bilinear product in constraint (1), (2) and (3) with a new variable, in the McCormick envelope of the bilinear product w = x * y. 2. By considering the intersection region between axis-aligned boxes, and the surface of a unit sphere in 3D. 3. By combining the two approaches above. This will result in a tighter relaxation.
These three approaches give different relaxation of SO(3) constraint (the feasible sets for each relaxation are different), and different computation speed. The users can switch between the approaches to find the best fit for their problem.
Note
If you have several rotation matrices that all need to be relaxed through mixed-integer constraint, then you can create a single MixedIntegerRotationConstraintGenerator object, and add the mixed-integer constraint to each rotation matrix, by calling AddToProgram() function repeatedly.
- __init__(self: pydrake.solvers.MixedIntegerRotationConstraintGenerator, approach: pydrake.solvers.MixedIntegerRotationConstraintGenerator.Approach, num_intervals_per_half_axis: int, interval_binning: pydrake.solvers.IntervalBinning) None
Constructor
- Parameter
approach
: Refer to MixedIntegerRotationConstraintGenerator::Approach for the details.
- Parameter
num_intervals_per_half_axis
: We will cut the range [-1, 1] evenly to 2 *
num_intervals_per_half_axis
small intervals. The number of binary variables will depend on the number of intervals.- Parameter
interval_binning
: The binning scheme we use to add SOS2 constraint with binary variables. If interval_binning = kLinear, then we will add 9 * 2 *
num_intervals_per_half_axis binary
variables; if interval_binning = kLogarithmic, then we will add 9 * (1 + log₂(num_intervals_per_half_axis)) binary variables. Refer to AddLogarithmicSos2Constraint and AddSos2Constraint for more details.
- Parameter
- AddToProgram(self: pydrake.solvers.MixedIntegerRotationConstraintGenerator, R: numpy.ndarray[object[3, 3], flags.f_contiguous], prog: pydrake.solvers.MathematicalProgram) pydrake.solvers.MixedIntegerRotationConstraintGenerator.ReturnType
Add the mixed-integer linear constraints to the optimization program, as a relaxation of SO(3) constraint on the rotation matrix
R
.- Parameter
R
: The rotation matrix on which the SO(3) constraint is imposed.
- Parameter
prog
: The optimization program to which the mixed-integer constraints (and additional variables) are added.
- Parameter
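A minimal sketch of generating the relaxation for one rotation matrix (the specific approach and interval count below are arbitrary choices for illustration):
Click to expand Python code...
from pydrake.solvers import (IntervalBinning, MathematicalProgram,
                             MixedIntegerRotationConstraintGenerator)

prog = MathematicalProgram()
R = prog.NewContinuousVariables(3, 3, "R")
generator = MixedIntegerRotationConstraintGenerator(
    approach=MixedIntegerRotationConstraintGenerator.Approach.kBoxSphereIntersection,
    num_intervals_per_half_axis=2,
    interval_binning=IntervalBinning.kLogarithmic)
ret = generator.AddToProgram(R, prog)
# ret.B_ holds the new binary variables; ret.lambda_ holds the SOS2 weights.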
- class Approach
Members:
kBoxSphereIntersection : Relax SO(3) constraint by considering the intersection between boxes
and the unit sphere surface.
kBilinearMcCormick : Relax SO(3) constraint by considering the McCormick envelope on the
bilinear product.
kBoth : Relax SO(3) constraint by considering both the intersection between
boxes and the unit sphere surface, and the McCormick envelope on the bilinear product.
- __init__(self: pydrake.solvers.MixedIntegerRotationConstraintGenerator.Approach, value: int) None
- kBilinearMcCormick = <Approach.kBilinearMcCormick: 1>
- kBoth = <Approach.kBoth: 2>
- kBoxSphereIntersection = <Approach.kBoxSphereIntersection: 0>
- property name
- property value
- interval_binning(self: pydrake.solvers.MixedIntegerRotationConstraintGenerator) pydrake.solvers.IntervalBinning
- num_intervals_per_half_axis(self: pydrake.solvers.MixedIntegerRotationConstraintGenerator) int
- phi(self: pydrake.solvers.MixedIntegerRotationConstraintGenerator) numpy.ndarray[numpy.float64[m, 1]]
Getter for φ.
- phi_nonnegative(self: pydrake.solvers.MixedIntegerRotationConstraintGenerator) numpy.ndarray[numpy.float64[m, 1]]
Getter for φ₊, the non-negative part of φ.
- class ReturnType
- __init__(*args, **kwargs)
- property B_
B_ contains the new binary variables added to the program. B_[i][j] represents in which interval R(i, j) lies. If we use linear binning, then B_[i][j] is of length 2 * num_intervals_per_half_axis_, and B_[i][j](k) = 1 => φ(k) ≤ R(i, j) ≤ φ(k + 1), B_[i][j](k) = 0 => R(i, j) ≥ φ(k + 1) or R(i, j) ≤ φ(k). If we use logarithmic binning, then B_[i][j] is of length 1 + log₂(num_intervals_per_half_axis_). If B_[i][j] represents integer k in reflected Gray code, then R(i, j) is in the interval [φ(k), φ(k+1)].
- property lambda_
λ contains part of the new continuous variables added to the program. λ_[i][j] is of length 2 * num_intervals_per_half_axis_ + 1, such that R(i, j) = φᵀ * λ_[i][j]. Notice that λ_[i][j] satisfies the special ordered set of type 2 (SOS2) constraint. Namely at most two entries in λ_[i][j] can be strictly positive, and these two entries have to be consecutive. Mathematically
Click to expand C++ code...
∑ₖ λ_[i][j](k) = 1 λ_[i][j](k) ≥ 0 ∀ k ∃ m s.t λ_[i][j](n) = 0 if n ≠ m and n ≠ m+1
- class pydrake.solvers.MobyLCPSolver
Bases:
pydrake.solvers.SolverInterface
A class for solving Linear Complementarity Problems (LCPs). Solving an LCP requires finding a solution to the problem:
Click to expand C++ code...
Mz + q = w z ≥ 0 w ≥ 0 zᵀw = 0
(where M ∈ ℝⁿˣⁿ and q ∈ ℝⁿ are problem inputs and z ∈ ℝⁿ and w ∈ ℝⁿ are unknown vectors) or correctly reporting that such a solution does not exist. In spite of their linear structure, solving LCPs is NP-Hard [Cottle 1992]. However, some LCPs are significantly easier to solve. For instance, it can be seen that the LCP is solvable in worst-case polynomial time for the case of symmetric positive-semi-definite M by formulating it as the following convex quadratic program:
Click to expand C++ code...
minimize: f(z) = zᵀw = zᵀ(Mz + q) subject to: z ≥ 0 Mz + q ≥ 0
Note that this quadratic program’s (QP) objective function at the minimum z cannot be less than zero, and the LCP is only solved if the objective function at the minimum is equal to zero. Since the seminal result of Karmarkar, it has been known that convex QPs are solvable in polynomial time [Karmarkar 1984].
The difficulty of solving an LCP is characterized by the properties of its particular matrix, namely the class of matrices it belongs to. Classes include, for example, Q, Q₀, P, P₀, copositive, and Z matrices. [Cottle 1992] and [Murty 1988] (see pp. 224-230 in the latter) describe relevant matrix classes in more detail.
- [Cottle 1992] R. Cottle, J.-S. Pang, and R. Stone. The Linear
Complementarity Problem. Academic Press, 1992.
- [Karmarkar 1984] N. Karmarkar. A New Polynomial-Time Algorithm for
Linear Programming. Combinatorica, 4(4), pp. 373-395.
- [Murty 1988] K. Murty. Linear Complementarity, Linear and Nonlinear
Programming. Heldermann Verlag, 1988.
- __init__(self: pydrake.solvers.MobyLCPSolver) None
- static id() pydrake.solvers.SolverId
- class pydrake.solvers.MosekSolver
Bases:
pydrake.solvers.SolverInterface
An implementation of SolverInterface for the commercially-licensed MOSEK (TM) solver (https://www.mosek.com/).
Drake downloads and builds MOSEK™ automatically, but to enable it you must set the location of your license file as described in the documentation at https://drake.mit.edu/bazel.html#mosek.
The MOSEKLM_LICENSE_FILE environment variable controls whether or not SolverInterface::enabled() returns true. Iff it is set to any non-empty value, then the solver is enabled; otherwise, the solver is not enabled.
Note
MOSEK™ only cares about the initial guess of integer variables. The initial guess of continuous variables is not passed to MOSEK™. If all the integer variables are set to some integer values, then MOSEK™ will be forced to compute the remaining continuous variable values as the initial guess. (MOSEK™ might change the values of the integer/binary variables in the subsequent iterations.) If the specified integer solution is infeasible or incomplete, MOSEK™ will simply ignore it. For more details, check https://docs.mosek.com/10.1/capi/tutorial-mio-shared.html?highlight=initial
MOSEK™ supports many solver parameters. You can refer to the full list of parameters in https://docs.mosek.com/10.1/capi/param-groups.html#doc-param-groups. On top of these parameters, we also provide the following additional parameters
- “writedata”
set to a file name so that MOSEK™ solver will write the optimization model to this file. check https://docs.mosek.com/10.1/capi/solver-io.html#saving-a-problem-to-a-file for more details. The supported file extensions are listed in https://docs.mosek.com/10.1/capi/supported-file-formats.html#doc-shared-file-formats. Set this parameter to “” if you don’t want to write to a file. Default is not to write to a file.
- __init__(self: pydrake.solvers.MosekSolver) None
- static AcquireLicense() pydrake.solvers.MosekSolver.License
This acquires a MOSEK™ license environment shared among all MosekSolver instances; the environment will stay valid as long as at least one shared_ptr returned by this function is alive. Call this ONLY if you must use different MathematicalProgram instances at different instances in time, and repeatedly acquiring the license is costly (e.g., requires contacting a license server).
- static id() pydrake.solvers.SolverId
- class License
Context-manageable license from
MosekSolver.AcquireLicense()
.- __init__(*args, **kwargs)
- is_valid(self: pydrake.solvers.MosekSolver.License) bool
Indicates that this has a valid license that has not been released.
- class pydrake.solvers.MosekSolverDetails
The MOSEK™ solver details after calling Solve() function. The user can call MathematicalProgramResult::get_solver_details<MosekSolver>() to obtain the details.
- __init__(*args, **kwargs)
- property optimizer_time
The MOSEK™ optimization time. Please refer to MSK_DINF_OPTIMIZER_TIME in https://docs.mosek.com/10.1/capi/constants.html?highlight=msk_dinf_optimizer_time
- property rescode
The response code returned from the MOSEK™ solver. Check https://docs.mosek.com/10.1/capi/response-codes.html for the meaning of the response code.
- property solution_status
The solution status after solving the problem. Check https://docs.mosek.com/10.1/capi/accessing-solution.html and https://docs.mosek.com/10.1/capi/constants.html#mosek.solsta for the meaning of the solution status.
- class pydrake.solvers.NloptSolver
Bases:
pydrake.solvers.SolverInterface
- __init__(self: pydrake.solvers.NloptSolver) None
- static AlgorithmName() str
The key name for the string-valued algorithm.
- static ConstraintToleranceName() str
The key name for the double-valued constraint tolerance.
- static id() pydrake.solvers.SolverId
- static MaxEvalName() str
The key name for int-valued maximum number of evaluations.
- static XAbsoluteToleranceName() str
The key name for double-valued x absolute tolerance.
- static XRelativeToleranceName() str
The key name for double-valued x relative tolerance.
- class pydrake.solvers.NloptSolverDetails
The NLopt solver details after calling Solve() function. The user can call MathematicalProgramResult::get_solver_details<NloptSolver>() to obtain the details.
- __init__(*args, **kwargs)
- property status
The return status of NLopt solver. Please refer to https://nlopt.readthedocs.io/en/latest/NLopt_Reference/#return-values.
- class pydrake.solvers.OsqpSolver
Bases:
pydrake.solvers.SolverInterface
A wrapper to call OSQP using Drake’s MathematicalProgram.
For details about OSQP’s available options, refer to the OSQP manual. Drake uses OSQP’s default values for all options except the following: - Drake defaults to
polish=true
(upstream default isFalse
). - Drake defaults toadaptive_rho_interval=ADAPTIVE_RHO_FIXED
to make the output deterministic (upstream default is0
, which uses non-deterministic timing measurements to establish the interval). N.B. Generally the interval should be an integer multiple ofcheck_termination
, so if you choose to override either option you should probably override both at once.- __init__(self: pydrake.solvers.OsqpSolver) None
- static id() pydrake.solvers.SolverId
- class pydrake.solvers.OsqpSolverDetails
The OSQP solver details after calling Solve() function. The user can call MathematicalProgramResult::get_solver_details<OsqpSolver>() to obtain the details.
- __init__(*args, **kwargs)
- property dual_res
Norm of dual residue.
- property iter
Number of iterations taken.
- property polish_time
Time taken for polish phase (seconds).
- property primal_res
Norm of primal residue.
- property run_time
Total OSQP time (seconds).
- property setup_time
Time taken for setup phase (seconds).
- property solve_time
Time taken for solve phase (seconds).
- property status_val
Status of the solver at termination. Please refer to https://github.com/oxfordcontrol/osqp/blob/master/include/constants.h
- property y
y contains the solution for the Lagrangian multiplier associated with l <= Ax <= u. The Lagrangian multiplier is set only when OSQP solves the problem. Notice that the ordering of the linear constraints is: linear inequality constraints first, then linear equality constraints.
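A short sketch of retrieving these details after solving a small QP directly with OSQP:
Click to expand Python code...
from pydrake.solvers import MathematicalProgram, OsqpSolver

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddQuadraticCost((x[0] - 1) ** 2 + (x[1] + 1) ** 2)
prog.AddLinearConstraint(x[0] + x[1] <= 1)
result = OsqpSolver().Solve(prog)
details = result.get_solver_details()  # an OsqpSolverDetails instance
print(details.status_val, details.iter, details.solve_time)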
- class pydrake.solvers.PerspectiveQuadraticCost
Bases:
pydrake.solvers.Cost
If \(z = Ax + b,\) implements a cost of the form:
\[(z_1^2 + z_2^2 + ... + z_{n-1}^2) / z_0.\]Note that this cost is convex when we additionally constrain z_0 > 0. It is treated as a generic nonlinear objective by most solvers.
Costs of this form are sometimes referred to as “quadratic over linear”.
- __init__(self: pydrake.solvers.PerspectiveQuadraticCost, A: numpy.ndarray[numpy.float64[m, n]], b: numpy.ndarray[numpy.float64[m, 1]]) None
Construct a cost of the form (z_1^2 + z_2^2 + … + z_{n-1}^2) / z_0 where z = Ax + b.
- Parameter
A
: Linear term.
- Parameter
b
: Constant term.
- Parameter
- A(self: pydrake.solvers.PerspectiveQuadraticCost) numpy.ndarray[numpy.float64[m, n]]
- b(self: pydrake.solvers.PerspectiveQuadraticCost) numpy.ndarray[numpy.float64[m, 1]]
- UpdateCoefficients(self: pydrake.solvers.PerspectiveQuadraticCost, new_A: numpy.ndarray[numpy.float64[m, n]], new_b: numpy.ndarray[numpy.float64[m, 1]]) None
Updates the coefficients of the cost. Note that the number of variables (columns of A) cannot change.
- Parameter
new_A
: New linear term.
- Parameter
new_b
: New constant term.
- Parameter
- class pydrake.solvers.PositiveSemidefiniteConstraint
Bases:
pydrake.solvers.Constraint
Implements a positive semidefinite constraint on a symmetric matrix S
\[\text{S is p.s.d.}\]
namely, all eigenvalues of S are non-negative.
Note
if the matrix S has 1 row, then it is better to impose a linear inequality constraint; if it has 2 rows, then it is better to impose a rotated Lorentz cone constraint, since a 2 x 2 matrix S being p.s.d is equivalent to the constraint [S(0, 0), S(1, 1), S(0, 1)] in the rotated Lorentz cone.
- __init__(self: pydrake.solvers.PositiveSemidefiniteConstraint, rows: int) None
Impose the constraint that a symmetric matrix with size
rows
xrows
is positive semidefinite.See also
MathematicalProgram::AddPositiveSemidefiniteConstraint() for how to use this constraint on some decision variables. We currently use this constraint as a place holder in MathematicalProgram, to indicate the positive semidefiniteness of some decision variables.
- Parameter
rows
: The number of rows (and columns) of the symmetric matrix.
Note
rows
should be a positive integer. If `rows`==1 or `rows`==2, then consider imposing a linear inequality or rotated Lorentz cone constraint respectively.Example:
Click to expand C++ code...
// Create a MathematicalProgram object.
auto prog = MathematicalProgram();
// Add a 3 x 3 symmetric matrix S to optimization program as new decision
// variables.
auto S = prog.NewSymmetricContinuousVariables<3>("S");
// Impose a positive semidefinite constraint on S.
std::shared_ptr<PositiveSemidefiniteConstraint> psd_constraint =
    prog.AddPositiveSemidefiniteConstraint(S);
/////////////////////////////////////////////////////////////
// Add more constraints to make the program more interesting,
// but this is not needed.
// Add the constraint that S(1, 0) = 1.
prog.AddBoundingBoxConstraint(1, 1, S(1, 0));
// Minimize S(0, 0) + S(1, 1) + S(2, 2).
prog.AddLinearCost(Eigen::RowVector3d(1, 1, 1), {S.diagonal()});
/////////////////////////////////////////////////////////////
// Now solve the program.
auto result = Solve(prog);
// Retrieve the solution of matrix S.
auto S_value = result.GetSolution(S);
// Compute the eigen values of the solution, to see if they are
// all non-negative.
Vector6d S_stacked;
S_stacked << S_value.col(0), S_value.col(1), S_value.col(2);
Eigen::VectorXd S_eigen_values;
psd_constraint->Eval(S_stacked, &S_eigen_values);
std::cout << "S solution is: " << S << std::endl;
std::cout << "The eigen value of S is " << S_eigen_values << std::endl;
- Parameter
- matrix_rows(self: pydrake.solvers.PositiveSemidefiniteConstraint) int
- class pydrake.solvers.ProgramAttribute
Members:
kGenericCost : A generic cost, doesn’t belong to any specific cost type
kGenericConstraint : A generic constraint, doesn’t belong to any specific
kQuadraticCost : A quadratic function as the cost.
kQuadraticConstraint : A constraint on a quadratic function.
kLinearCost : A linear function as the cost.
kLinearConstraint : A constraint on a linear function.
kLinearEqualityConstraint : An equality constraint on a linear function.
kLinearComplementarityConstraint : A linear complementarity constraint in
kLorentzConeConstraint : A Lorentz cone constraint.
kRotatedLorentzConeConstraint : A rotated Lorentz cone constraint.
kPositiveSemidefiniteConstraint : A positive semidefinite constraint.
kExponentialConeConstraint : An exponential cone constraint.
kL2NormCost : An L2 norm |Ax+b|
kBinaryVariable : Variable taking binary value {0, 1}.
kCallback : Supports callback during solving the problem.
- __init__(self: pydrake.solvers.ProgramAttribute, value: int) None
- kBinaryVariable = <ProgramAttribute.kBinaryVariable: 13>
- kCallback = <ProgramAttribute.kCallback: 14>
- kExponentialConeConstraint = <ProgramAttribute.kExponentialConeConstraint: 11>
- kGenericConstraint = <ProgramAttribute.kGenericConstraint: 1>
- kGenericCost = <ProgramAttribute.kGenericCost: 0>
- kL2NormCost = <ProgramAttribute.kL2NormCost: 12>
- kLinearComplementarityConstraint = <ProgramAttribute.kLinearComplementarityConstraint: 7>
- kLinearConstraint = <ProgramAttribute.kLinearConstraint: 5>
- kLinearCost = <ProgramAttribute.kLinearCost: 4>
- kLinearEqualityConstraint = <ProgramAttribute.kLinearEqualityConstraint: 6>
- kLorentzConeConstraint = <ProgramAttribute.kLorentzConeConstraint: 8>
- kPositiveSemidefiniteConstraint = <ProgramAttribute.kPositiveSemidefiniteConstraint: 10>
- kQuadraticConstraint = <ProgramAttribute.kQuadraticConstraint: 3>
- kQuadraticCost = <ProgramAttribute.kQuadraticCost: 2>
- kRotatedLorentzConeConstraint = <ProgramAttribute.kRotatedLorentzConeConstraint: 9>
- property name
- property value
- class pydrake.solvers.ProgramType
A coarse categorization of the optimization problem based on the type of constraints/costs/variables. Notice that Drake chooses the solver based on a finer category; for example we have a specific solver for equality-constrained convex QP.
Members:
kLP : Linear Programming, with a linear cost and linear constraints.
kQP : Quadratic Programming, with a convex quadratic cost and linear
constraints.
kSOCP : Second-order Cone Programming, with a linear cost and second-order
cone constraints.
kSDP : Semidefinite Programming, with a linear cost and positive semidefinite
matrix constraints.
kGP : Geometric Programming, with a linear cost and exponential cone
constraints.
kCGP : Conic Geometric Programming, this is a superset that unifies GP and
SDP. Refer to http://people.lids.mit.edu/pari/cgp_preprint.pdf for more details.
kMILP : Mixed-integer Linear Programming. LP with some variables taking binary
values.
kMIQP : Mixed-integer Quadratic Programming. QP with some variables taking
binary values.
kMISOCP : Mixed-integer Second-order Cone Programming. SOCP with some variables
taking binary values.
kMISDP : Mixed-integer Semidefinite Programming. SDP with some variables taking
binary values.
kQuadraticCostConicConstraint : convex quadratic cost with nonlinear conic constraints.
kNLP : nonlinear programming. Programs with generic costs or constraints.
kLCP : Linear Complementarity Programs. Programs with linear complementary
constraints and no cost.
kUnknown : Does not fall into any of the types above.
- __init__(self: pydrake.solvers.ProgramType, value: int) None
- kCGP = <ProgramType.kCGP: 5>
- kGP = <ProgramType.kGP: 4>
- kLCP = <ProgramType.kLCP: 12>
- kLP = <ProgramType.kLP: 0>
- kMILP = <ProgramType.kMILP: 6>
- kMIQP = <ProgramType.kMIQP: 7>
- kMISDP = <ProgramType.kMISDP: 9>
- kMISOCP = <ProgramType.kMISOCP: 8>
- kNLP = <ProgramType.kNLP: 11>
- kQP = <ProgramType.kQP: 1>
- kQuadraticCostConicConstraint = <ProgramType.kQuadraticCostConicConstraint: 10>
- kSDP = <ProgramType.kSDP: 3>
- kSOCP = <ProgramType.kSOCP: 2>
- kUnknown = <ProgramType.kUnknown: 13>
- property name
- property value
- class pydrake.solvers.PyFunctionConstraint
Bases:
pydrake.solvers.Constraint
Constraint with its evaluator as a Python function
- __init__(*args, **kwargs)
- set_bounds(self: pydrake.solvers.PyFunctionConstraint, lower_bound: numpy.ndarray[numpy.float64[m, 1]], upper_bound: numpy.ndarray[numpy.float64[m, 1]]) None
Set both the lower and upper bounds of the constraint.
- UpdateLowerBound(self: pydrake.solvers.PyFunctionConstraint, new_lb: numpy.ndarray[numpy.float64[m, 1]]) None
Update the lower bound of the constraint.
- UpdateUpperBound(self: pydrake.solvers.PyFunctionConstraint, new_ub: numpy.ndarray[numpy.float64[m, 1]]) None
Update the upper bound of the constraint.
- class pydrake.solvers.QuadraticConstraint
Bases:
pydrake.solvers.Constraint
lb ≤ .5 xᵀQx + bᵀx ≤ ub Without loss of generality, the class stores a symmetric matrix Q. For a non-symmetric matrix Q₀, we can define Q = (Q₀ + Q₀ᵀ) / 2, since xᵀQ₀x = xᵀQ₀ᵀx = xᵀ*(Q₀+Q₀ᵀ)/2 *x. The first equality holds because the transpose of a scalar is the scalar itself. Hence we can always convert a non-symmetric matrix Q₀ to a symmetric matrix Q.
- __init__(self: pydrake.solvers.QuadraticConstraint, Q0: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], b: numpy.ndarray[numpy.float64[m, 1]], lb: float, ub: float, hessian_type: Optional[pydrake.solvers.QuadraticConstraint.HessianType] = None) None
Construct a quadratic constraint.
- Template parameter
DerivedQ
: The type for Q.
- Template parameter
Derivedb
: The type for b.
- Parameter
Q0
: The square matrix. Notice that Q₀ does not have to be symmetric.
- Parameter
b
: The linear coefficient.
- Parameter
lb
: The lower bound.
- Parameter
ub
: The upper bound.
- Parameter
hessian_type
: (optional) Indicates the type of Hessian matrix Q0. If hessian_type is not std::nullopt, then the user guarantees the type of Q0. If hessian_type=std::nullopt, then QuadraticConstraint will check the type of Q0. To speed up the constructor, set hessian_type != std::nullopt if you can. If this type is set incorrectly, then the downstream code (for example the solver) will malfunction.
- Raises
RuntimeError if Q0 isn't a square matrix, or b.rows() != Q0.rows(). –
- Template parameter
- b(self: pydrake.solvers.QuadraticConstraint) numpy.ndarray[numpy.float64[m, 1]]
- hessian_type(self: pydrake.solvers.QuadraticConstraint) pydrake.solvers.QuadraticConstraint.HessianType
- class HessianType
Whether the Hessian matrix is positive semidefinite, negative semidefinite, or indefinite.
Members:
kPositiveSemidefinite :
kNegativeSemidefinite :
kIndefinite :
- __init__(self: pydrake.solvers.QuadraticConstraint.HessianType, value: int) None
- kIndefinite = <HessianType.kIndefinite: 2>
- kNegativeSemidefinite = <HessianType.kNegativeSemidefinite: 1>
- kPositiveSemidefinite = <HessianType.kPositiveSemidefinite: 0>
- property name
- property value
- is_convex(self: pydrake.solvers.QuadraticConstraint) bool
Returns if this quadratic constraint is convex.
- Q(self: pydrake.solvers.QuadraticConstraint) numpy.ndarray[numpy.float64[m, n]]
The symmetric matrix Q, being the Hessian of this constraint.
- UpdateCoefficients(self: pydrake.solvers.QuadraticConstraint, new_Q: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], new_b: numpy.ndarray[numpy.float64[m, 1]], hessian_type: Optional[pydrake.solvers.QuadraticConstraint.HessianType] = None) None
Updates the quadratic and linear term of the constraint. The new matrices need to have the same dimension as before.
- Parameter
new_Q
: new quadratic term
- Parameter
new_b
: new linear term
- Parameter
hessian_type
: (optional) Indicates the type of Hessian matrix Q0. If hessian_type is not std::nullopt, then the user guarantees the type of Q0. If hessian_type=std::nullopt, then QuadraticConstraint will check the type of Q0. To speed up the constructor, set hessian_type != std::nullopt if you can.
- Parameter
- class pydrake.solvers.QuadraticCost
Bases:
pydrake.solvers.Cost
Implements a cost of the form
\[.5 x'Qx + b'x + c\].
- __init__(self: pydrake.solvers.QuadraticCost, Q: numpy.ndarray[numpy.float64[m, n]], b: numpy.ndarray[numpy.float64[m, 1]], c: float, is_convex: Optional[bool] = None) None
Constructs a cost of the form
\[.5 x'Qx + b'x + c\].
- Parameter
Q
: Quadratic term.
- Parameter
b
: Linear term.
- Parameter
c
: (optional) Constant term.
- Parameter
is_hessian_psd
: (optional) Indicates if the Hessian matrix Q is positive semidefinite (psd) or not. If set to true, then the user guarantees that Q is psd; if set to false, then the user guarantees that Q is not psd. If set to std::nullopt, then the constructor will check if Q is psd or not. The default is std::nullopt. To speed up the constructor, set is_hessian_psd to either true or false.
- Parameter
- b(self: pydrake.solvers.QuadraticCost) numpy.ndarray[numpy.float64[m, 1]]
- c(self: pydrake.solvers.QuadraticCost) float
- is_convex(self: pydrake.solvers.QuadraticCost) bool
Returns true if this cost is convex. A quadratic cost is convex if and only if its Hessian matrix Q is positive semidefinite.
- Q(self: pydrake.solvers.QuadraticCost) numpy.ndarray[numpy.float64[m, n]]
Returns the symmetric matrix Q, as the Hessian of the cost.
- UpdateCoefficients(self: pydrake.solvers.QuadraticCost, new_Q: numpy.ndarray[numpy.float64[m, n]], new_b: numpy.ndarray[numpy.float64[m, 1]], new_c: float = 0, is_convex: Optional[bool] = None) None
Updates the quadratic and linear term of the constraint. The new matrices need to have the same dimension as before.
- Parameter
new_Q
: New quadratic term.
- Parameter
new_b
: New linear term.
- Parameter
new_c
: (optional) New constant term.
- Parameter
is_hessian_psd
: (optional) Indicates if the Hessian matrix Q is positive semidefinite (psd) or not. If set to true, then the user guarantees that Q is psd; if set to false, then the user guarantees that Q is not psd. If set to std::nullopt, then this function will check if Q is psd or not. The default is std::nullopt. To speed up the computation, set is_hessian_psd to either true or false.
- Parameter
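A brief sketch of constructing and updating a QuadraticCost directly (passing is_convex skips the constructor's own psd check, as noted above):
Click to expand Python code...
import numpy as np
from pydrake.solvers import QuadraticCost

# 0.5 x'Qx + b'x + c with a psd Hessian.
cost = QuadraticCost(Q=2 * np.eye(2), b=np.array([1.0, -1.0]), c=0.5,
                     is_convex=True)
print(cost.is_convex())                 # True
print(cost.Eval(np.array([1.0, 2.0])))  # evaluates 0.5 x'Qx + b'x + c
cost.UpdateCoefficients(new_Q=4 * np.eye(2), new_b=np.zeros(2), new_c=0.0)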
- class pydrake.solvers.RemoveFreeVariableMethod
SDPA format doesn’t accept free variables, namely the problem it solves is in this form P1
max tr(C * X) s.t tr(Aᵢ*X) = aᵢ X ≽ 0.
Notice that the decision variable X has to be in the proper cone X ≽ 0, and it doesn’t accept free variables (variables without the conic constraint). On the other hand, most real-world applications require free variables, namely problems in this form P2
max tr(C * X) + dᵀs s.t tr(Aᵢ*X) + bᵢᵀs = aᵢ X ≽ 0 s is free.
In order to remove the free variables, we consider three approaches. 1. Replace a free variable s with two variables s = p - q, p ≥ 0, q ≥ 0. 2. First write the dual of the problem P2 as D2
min aᵀy s.t ∑ᵢ yᵢAᵢ - C = Z Z ≽ 0 Bᵀ * y = d,
where bᵢᵀ is the i’th row of B. The last constraint Bᵀ * y = d means y = ŷ + Nt, where Bᵀ * ŷ = d, and N is the null space of Bᵀ. Hence, D2 is equivalent to the following problem, D3
min aᵀNt + aᵀŷ s.t ∑ᵢ tᵢFᵢ - (C -∑ᵢ ŷᵢAᵢ) = Z Z ≽ 0,
where Fᵢ = ∑ⱼ NⱼᵢAⱼ. D3 is the dual of the following primal problem P3 without free variables
max tr((C-∑ᵢ ŷᵢAᵢ)*X̂) + aᵀŷ s.t tr(FᵢX̂) = (Nᵀa)(i) X̂ ≽ 0.
Then (X, s) = (X̂, B⁻¹(a - tr(Aᵢ X̂))) is the solution to the original problem P2. 3. Add a slack variable t, with the Lorentz cone constraint t ≥ sqrt(sᵀs).
Members:
kNullspace : Approach 2, reformulate the dual problem by considering the nullspace
of the linear constraint in the dual.
kTwoSlackVariables : Approach 1, replace a free variable s as s = y⁺ - y⁻, y⁺ ≥ 0, y⁻ ≥ 0.
kLorentzConeSlack : Approach 3, add a slack variable t with the lorentz cone constraint t
≥ sqrt(sᵀs).
- __init__(self: pydrake.solvers.RemoveFreeVariableMethod, value: int) None
- kLorentzConeSlack = <RemoveFreeVariableMethod.kLorentzConeSlack: 3>
- kNullspace = <RemoveFreeVariableMethod.kNullspace: 2>
- kTwoSlackVariables = <RemoveFreeVariableMethod.kTwoSlackVariables: 1>
- property name
- property value
- class pydrake.solvers.RotatedLorentzConeConstraint
Bases:
pydrake.solvers.Constraint
Constraining that the linear expression \(z=Ax+b\) lies within the rotated Lorentz cone. A vector z ∈ ℝⁿ lies within the rotated Lorentz cone if
\[z_0 \ge 0,\quad z_1 \ge 0,\quad z_0 z_1 \ge z_2^2 + z_3^2 + ... + z_{n-1}^2\]
where A ∈ ℝⁿˣᵐ, b ∈ ℝⁿ are given matrices.
For more information and visualization, please refer to https://docs.mosek.com/modeling-cookbook/cqo.html (Fig 3.1)
- __init__(self: pydrake.solvers.RotatedLorentzConeConstraint, A: numpy.ndarray[numpy.float64[m, n]], b: numpy.ndarray[numpy.float64[m, 1]]) None
- Raises
RuntimeError if A.rows() < 3. –
- A(self: pydrake.solvers.RotatedLorentzConeConstraint) scipy.sparse.csc_matrix[numpy.float64]
Getter for A.
- b(self: pydrake.solvers.RotatedLorentzConeConstraint) numpy.ndarray[numpy.float64[m, 1]]
Getter for b.
- UpdateCoefficients(self: pydrake.solvers.RotatedLorentzConeConstraint, new_A: numpy.ndarray[numpy.float64[m, n], flags.f_contiguous], new_b: numpy.ndarray[numpy.float64[m, 1]]) None
Updates the coefficients, the updated constraint is z=new_A * x + new_b in the rotated Lorentz cone.
- Raises
RuntimeError if the new_A.cols() != A.cols(), namely the variable size should not change. –
- Precondition:
new_A.rows() >= 3 and new_A.rows() == new_b.rows().
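A minimal sketch of imposing this constraint through MathematicalProgram, assuming the (A, b, vars) overload of AddRotatedLorentzConeConstraint; with A the identity and b zero the constraint reads x₀ ≥ 0, x₁ ≥ 0, x₀x₁ ≥ x₂²:
Click to expand Python code...
import numpy as np
from pydrake.solvers import MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(3, "x")
# z = A x + b must lie in the rotated Lorentz cone.
prog.AddRotatedLorentzConeConstraint(np.eye(3), np.zeros(3), x)
prog.AddBoundingBoxConstraint(0, 1, x[:2])
prog.AddLinearCost(-x[2])  # maximize x[2]
result = Solve(prog)
print(result.GetSolution(x))  # expect roughly [1, 1, 1]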
- class pydrake.solvers.ScsSolver
Bases:
pydrake.solvers.SolverInterface
- __init__(self: pydrake.solvers.ScsSolver) None
- static id() pydrake.solvers.SolverId
- class pydrake.solvers.ScsSolverDetails
The SCS solver details after calling Solve() function. The user can call MathematicalProgramResult::get_solver_details<ScsSolver>() to obtain the details.
- __init__(*args, **kwargs)
- property dual_objective
Dual objective value at termination. Equal to SCS_INFO.dobj
- property duality_gap
duality gap. Equal to SCS_INFO.gap.
- property iter
The information below is returned by SCS at termination; please refer to the “SCS_INFO” struct in https://github.com/cvxgrp/scs/blob/master/include/scs.h. Number of iterations taken at termination. Equal to SCS_INFO.iter
- property primal_objective
Primal objective value at termination. Equal to SCS_INFO.pobj
- property primal_residue
Primal equality residue. Equal to SCS_INFO.res_pri
- property residue_infeasibility
infeasibility certificate residue. Equal to SCS_INFO.res_infeas
- property residue_unbounded_a
unbounded certificate residue. Equal to SCS_INFO.res_unbdd_a
- property residue_unbounded_p
unbounded certificate residue. Equal to SCS_INFO.res_unbdd_p
- property s
The primal equality constraint slack, namely Ax + s = b where x is the primal variable.
- property scs_setup_time
Time taken for SCS to set up, in milliseconds. Equal to SCS_INFO.setup_time.
- property scs_solve_time
Time taken for SCS to solve, in milliseconds. Equal to SCS_INFO.solve_time.
- property scs_status
The status of the solver at termination. Please refer to https://github.com/cvxgrp/scs/blob/master/include/glbopts.h Note that the SCS code on github master might be slightly more up-to-date than the version used in Drake.
- property y
The dual variable values at termination.
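A short sketch of solving a small convex program with ScsSolver and reading the details afterwards; the program itself is made up for illustration, and the details are retrieved through MathematicalProgramResult.get_solver_details() as described above.

```python
from pydrake.solvers import MathematicalProgram, ScsSolver

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddQuadraticCost((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)
prog.AddLinearConstraint(x[0] + x[1] <= 1.0)

solver = ScsSolver()
if solver.available() and solver.enabled():
    result = solver.Solve(prog)
    details = result.get_solver_details()
    # Fields mirror SCS_INFO, as listed in the properties above.
    print(details.scs_status, details.iter,
          details.primal_objective, details.duality_gap)
```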
- class pydrake.solvers.SemidefiniteRelaxationOptions
Configuration options for the MakeSemidefiniteRelaxation. Throughout these options, we refer to the variables of the original optimization program as y, and the semidefinite variable of the associated relaxation as X.
X has the structure
X = [Y  y]
    [yᵀ 1]
- __init__(self: pydrake.solvers.SemidefiniteRelaxationOptions, **kwargs) None
- property add_implied_linear_constraints
Given a program with the linear constraints Ay ≤ b, sets whether to add the implied linear constraints [A, -b]X[A, -b]ᵀ ≥ 0 to the semidefinite relaxation.
- property add_implied_linear_equality_constraints
Given a program with the linear equality constraints Ay = b, sets whether to add the implied linear constraints [A, -b]X = 0 to the semidefinite relaxation.
- property preserve_convex_quadratic_constraints
Given a program with convex quadratic constraints, sets whether equivalent rotated second-order cone constraints are enforced on the last column of X in the semidefinite relaxation. This ensures that the last column of X, which is a relaxation of the original program’s variables, satisfies the original program’s quadratic constraints.
- set_to_strongest(self: pydrake.solvers.SemidefiniteRelaxationOptions) None
Configure the semidefinite relaxation options to provide the strongest possible semidefinite relaxation that we currently support. This in general will give the tightest convex relaxation we support, but the longest solve times.
- set_to_weakest(self: pydrake.solvers.SemidefiniteRelaxationOptions) None
Configure the semidefinite relaxation options to provide the weakest semidefinite relaxation that we currently support. This in general will create the loosest convex relaxation we support, but the shortest solve times. This is equivalent to the standard Shor Relaxation (see Quadratic Optimization Problems by NZ Shor or Semidefinite Programming by Vandenberghe and Boyd).
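A hedged sketch of how these options might be passed to MakeSemidefiniteRelaxation; the toy program and the `options=` keyword are assumptions, so check the MakeSemidefiniteRelaxation signature in this module before use.

```python
from pydrake.solvers import (MakeSemidefiniteRelaxation, MathematicalProgram,
                             SemidefiniteRelaxationOptions)

# A small nonconvex QP whose relaxation we want to tighten.
prog = MathematicalProgram()
y = prog.NewContinuousVariables(2, "y")
prog.AddQuadraticCost(y[0] * y[1])           # indefinite, hence nonconvex
prog.AddLinearConstraint(y[0] + y[1] <= 1.0)
prog.AddBoundingBoxConstraint(-1.0, 1.0, y)

options = SemidefiniteRelaxationOptions()
options.set_to_strongest()                   # tightest supported relaxation
relaxation = MakeSemidefiniteRelaxation(prog, options=options)
```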
- class pydrake.solvers.SnoptSolver
Bases:
pydrake.solvers.SolverInterface
An implementation of SolverInterface for the commercially-licensed SNOPT solver (https://ccom.ucsd.edu/~optimizers/solvers/snopt/).
Builds of Drake from source do not compile SNOPT by default, so SolverInterface::available() will return false. You must opt-in to build SNOPT per the documentation at https://drake.mit.edu/bazel.html#snopt.
Drake’s pre-compiled binary releases do incorporate SNOPT, so SolverInterface::available() will return true. Thanks to Philip E. Gill and Elizabeth Wong for their kind support.
There is no license configuration required to use SNOPT, but you may set the environment variable
DRAKE_SNOPT_SOLVER_ENABLED
to “0” to force-disable SNOPT, in which case SolverInterface::enabled() will return false.
- __init__(self: pydrake.solvers.SnoptSolver) None
- static id() pydrake.solvers.SolverId
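A small sketch of the availability/enabled checks described above; `prog` stands in for any existing MathematicalProgram and is an assumption of the example.

```python
from pydrake.solvers import SnoptSolver

# To force-disable SNOPT at runtime, set the environment variable before use:
#   os.environ["DRAKE_SNOPT_SOLVER_ENABLED"] = "0"

solver = SnoptSolver()
# available(): SNOPT was compiled into this build of Drake.
# enabled(): SNOPT has not been force-disabled at runtime.
if solver.available() and solver.enabled():
    result = solver.Solve(prog)  # prog: an existing MathematicalProgram
```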
- class pydrake.solvers.SnoptSolverDetails
The SNOPT solver details after calling Solve() function. The user can call MathematicalProgramResult::get_solver_details<SnoptSolver>() to obtain the details.
- __init__(*args, **kwargs)
- property F
The final value of the vector of problem functions F(x).
- property Fmul
The final value of the dual variables (Lagrange multipliers) for the general constraints F_lower <= F(x) <= F_upper.
- property info
The snopt INFO field. Please refer to section 8.6 in “User’s Guide for SNOPT Version 7: Software for Large-Scale Nonlinear Programming” (https://web.stanford.edu/group/SOL/guides/sndoc7.pdf) by Philip E. Gill to interpret the INFO field.
- property solve_time
The duration of the snopt solve in seconds.
- property xmul
The final value of the dual variables for the bound constraint x_lower <= x <= x_upper.
- class pydrake.solvers.SolutionResult
Members:
kSolutionFound : Found the optimal solution.
kInvalidInput : Invalid input.
kInfeasibleConstraints : The primal is infeasible.
kUnbounded : The primal is unbounded.
kSolverSpecificError : Solver-specific error. (Try
MathematicalProgramResult::get_solver_details() or enabling verbose solver output.)
kInfeasibleOrUnbounded : The primal is either infeasible or unbounded.
kIterationLimit : Reached the iteration limit.
kDualInfeasible : Dual problem is infeasible. In this case we cannot infer the status of
the primal problem.
kSolutionResultNotSet : The initial (invalid) solution result. This value should be
overwritten by the solver during Solve().
- __init__(self: pydrake.solvers.SolutionResult, value: int) None
- kDualInfeasible = <SolutionResult.kDualInfeasible: -7>
- kInfeasibleConstraints = <SolutionResult.kInfeasibleConstraints: -2>
- kInfeasibleOrUnbounded = <SolutionResult.kInfeasibleOrUnbounded: -5>
- kInvalidInput = <SolutionResult.kInvalidInput: -1>
- kIterationLimit = <SolutionResult.kIterationLimit: -6>
- kSolutionFound = <SolutionResult.kSolutionFound: 0>
- kSolutionResultNotSet = <SolutionResult.kSolutionResultNotSet: -8>
- kSolverSpecificError = <SolutionResult.kSolverSpecificError: -4>
- kUnbounded = <SolutionResult.kUnbounded: -3>
- property name
- property value
- pydrake.solvers.Solve(prog: pydrake.solvers.MathematicalProgram, initial_guess: Optional[numpy.ndarray[numpy.float64[m, 1]]] = None, solver_options: Optional[pydrake.solvers.SolverOptions] = None) pydrake.solvers.MathematicalProgramResult
Solves an optimization program, with optional initial guess and solver options. This function first chooses the best solver, depending on the availability of each solver and the program formulation; it then constructs that solver and calls its Solve function. The optimization result is stored in the return value.
- Parameter
prog
: Contains the formulation of the program, and possibly solver options.
- Parameter
initial_guess
: The initial guess for the decision variables.
- Parameter
solver_options
: The options in addition to those stored in
prog
. For each option entry (such as printing progress), there are 4 ways to set that option, and the priority given to the solver options is as follows (from lowest to highest):
1. a common option set on the MathematicalProgram itself
2. a common option passed as an argument to Solve
3. a solver-specific option set on the MathematicalProgram itself
4. a solver-specific option passed as an argument to Solve
- Returns
result The result of solving the program through the solver.
- Parameter
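A minimal sketch of calling Solve with an initial guess and extra solver options, then checking the outcome; the quadratic program is illustrative only.

```python
import numpy as np
from pydrake.solvers import (CommonSolverOption, MathematicalProgram,
                             SolutionResult, Solve, SolverOptions)

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddQuadraticCost(x[0] ** 2 + x[1] ** 2)
prog.AddLinearConstraint(x[0] + x[1] == 1.0)

# Options passed here take priority over matching options stored in prog.
options = SolverOptions()
options.SetOption(CommonSolverOption.kPrintToConsole, 1)

result = Solve(prog, initial_guess=np.array([0.5, 0.5]),
               solver_options=options)
if result.is_success():
    print(result.GetSolution(x))
print(result.get_solution_result() == SolutionResult.kSolutionFound)
```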
- pydrake.solvers.SolveInParallel(*args, **kwargs)
Overloaded function.
SolveInParallel(progs: list[pydrake.solvers.MathematicalProgram], initial_guesses: Optional[list[Optional[numpy.ndarray[numpy.float64[m, 1]]]]] = None, solver_options: Optional[list[Optional[pydrake.solvers.SolverOptions]]] = None, solver_ids: Optional[list[Optional[pydrake.solvers.SolverId]]] = None, parallelism: pydrake.common.Parallelism = Parallelism(num_threads=16), dynamic_schedule: bool = False) -> list[pydrake.solvers.MathematicalProgramResult]
Solves progs[i] into result[i], optionally using initial_guesses[i] and solver_options[i] if given, by invoking the solver with id solver_ids[i] if provided. If solver_ids[i] is nullopt then the best available solver is selected for progs[i], depending on solver availability and the problem formulation. If solver_ids is not provided at all, then this is done for every progs[i].
Uses at most parallelism cores, with static scheduling by default.
- Parameter
dynamic_schedule
: If dynamic_schedule is false then static scheduling is used and so each core will solve approximately 1/parallelism of the programs. This is most efficient when all the programs take approximately the same amount of time to solve. If dynamic_schedule is true, then dynamic scheduling is used and all the programs are queued into a single pool and each core will take the next program off the queue when it becomes available. This is best when each program takes a dramatically different amount of time to solve.
Note
When using a proprietary solver (e.g. Mosek) your organization may have limited license seats. It is recommended that the number of parallel solves does not exceed the total number of license seats.
Note
Only programs which are thread safe are solved concurrently. Programs that are not thread safe will be solved sequentially in a thread safe manner.
- Raises
RuntimeError if initial_guess and solver_options are provided and –
not the same size as progs. –
RuntimeError if any of the progs are nullptr. –
RuntimeError if any of the programs cannot be solved. –
SolveInParallel(progs: list[pydrake.solvers.MathematicalProgram], initial_guesses: Optional[list[Optional[numpy.ndarray[numpy.float64[m, 1]]]]] = None, solver_options: pydrake.solvers.SolverOptions = None, solver_id: Optional[pydrake.solvers.SolverId] = None, parallelism: pydrake.common.Parallelism = Parallelism(num_threads=16), dynamic_schedule: bool = False) -> list[pydrake.solvers.MathematicalProgramResult]
Provides the same functionality as SolveInParallel, but allows for specifying a single solver id and solver option that is used when solving all programs.
- Raises
RuntimeError if the provided solver cannot solve all of progs. –
RuntimeError if initial_guesses are provided and not the same size –
as progs. –
RuntimeError if any of the progs are nullptr. –
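A short sketch of the first overload, solving several independent programs across threads; the programs are toy examples and the thread count is arbitrary.

```python
from pydrake.common import Parallelism
from pydrake.solvers import MathematicalProgram, SolveInParallel

progs = []
for k in range(8):
    prog = MathematicalProgram()
    x = prog.NewContinuousVariables(1, "x")
    prog.AddQuadraticCost((x[0] - k) ** 2)
    progs.append(prog)

# Dynamic scheduling helps when solve times differ widely between programs.
results = SolveInParallel(progs,
                          parallelism=Parallelism(num_threads=4),
                          dynamic_schedule=True)
print([r.is_success() for r in results])
```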
- class pydrake.solvers.SolverId
Identifies a SolverInterface implementation.
A moved-from instance is guaranteed to be empty and will not compare equal to any non-empty ID.
- __init__(self: pydrake.solvers.SolverId, name: str) None
Constructs a specific, known solver type. Internally, a hidden integer is allocated and assigned to this instance; all instances that share an integer (including copies of this instance) are considered equal. The solver names are not enforced to be unique, though we recommend that they remain so in practice.
For best performance, choose a name that is 15 characters or less, so that it fits within the libstdc++ “small string” optimization (“SSO”).
- name(self: pydrake.solvers.SolverId) str
- class pydrake.solvers.SolverInterface
Interface used by implementations of individual solvers.
- __init__(self: pydrake.solvers.SolverInterface) None
- AreProgramAttributesSatisfied(self: pydrake.solvers.SolverInterface, prog: pydrake.solvers.MathematicalProgram) bool
Returns true iff the program’s attributes are compatible with this solver’s capabilities.
- available(self: pydrake.solvers.SolverInterface) bool
Returns true iff support for this solver has been compiled into Drake. When this method returns false, the Solve method will throw.
Most solver implementations will always return true, but certain solvers may have been excluded at compile-time due to licensing restrictions, or to narrow Drake’s dependency footprint. In Drake’s default build, only commercially-licensed solvers might return false.
Contrast this with enabled(), which reflects whether a solver has been configured for use at runtime (not compile-time).
For details on linking commercial solvers, refer to the solvers’ class overview documentation, e.g., SnoptSolver, MosekSolver, GurobiSolver.
- enabled(self: pydrake.solvers.SolverInterface) bool
Returns true iff this solver is properly configured for use at runtime. When this method returns false, the Solve method will throw.
Most solver implementations will always return true, but certain solvers require additional configuration before they may be used, e.g., setting an environment variable to specify a license file or license server. In Drake’s default build, only commercially-licensed solvers might return false.
Contrast this with available(), which reflects whether a solver has been incorporated into Drake at compile-time (and has nothing to do with the runtime configuration). A solver where available() returns false may still return true for enabled() if it is properly configured.
The mechanism to configure a particular solver implementation is specific to the solver in question, but typically uses an environment variable. For details on configuring commercial solvers, refer to the solvers’ class overview documentation, e.g., SnoptSolver, MosekSolver, GurobiSolver.
- ExplainUnsatisfiedProgramAttributes(self: pydrake.solvers.SolverInterface, prog: pydrake.solvers.MathematicalProgram) str
Describes the reasons (if any) why the program is incompatible with this solver’s capabilities. If AreProgramAttributesSatisfied would return true for the program, then this function returns the empty string.
- Solve(*args, **kwargs)
Overloaded function.
Solve(self: pydrake.solvers.SolverInterface, prog: pydrake.solvers.MathematicalProgram, initial_guess: Optional[numpy.ndarray[numpy.float64[m, 1]]], solver_options: Optional[pydrake.solvers.SolverOptions], result: pydrake.solvers.MathematicalProgramResult) -> None
Solves an optimization program with optional initial guess and solver options. Note that the initial guess and solver options are not written to prog. If prog has set an option for a solver, and solver_options contains a different value for the same option on the same solver, then solver_options takes priority. Derived implementations of this interface may elect to throw RuntimeError for badly formed programs.
Solve(self: pydrake.solvers.SolverInterface, prog: pydrake.solvers.MathematicalProgram, initial_guess: Optional[numpy.ndarray[numpy.float64[m, 1]]] = None, solver_options: Optional[pydrake.solvers.SolverOptions] = None) -> pydrake.solvers.MathematicalProgramResult
Like SolverInterface::Solve(), but the result is a return value instead of an output argument.
- solver_id(self: pydrake.solvers.SolverInterface) pydrake.solvers.SolverId
Returns the identifier of this solver.
- solver_type(self: pydrake.solvers.SolverInterface) Optional[pydrake.solvers.SolverType]
- SolverName(self: pydrake.solvers.SolverInterface) str
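A sketch of the typical guard clauses when driving a specific SolverInterface implementation directly; IpoptSolver is used here purely as an example implementation, and the program is illustrative.

```python
from pydrake.solvers import IpoptSolver, MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, "x")
prog.AddQuadraticCost((x[0] - 3.0) ** 2)

solver = IpoptSolver()          # any SolverInterface implementation
print(solver.solver_id().name())
if (solver.available() and solver.enabled()
        and solver.AreProgramAttributesSatisfied(prog)):
    result = solver.Solve(prog)
else:
    print(solver.ExplainUnsatisfiedProgramAttributes(prog))
```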
- class pydrake.solvers.SolverOptions
Stores options for multiple solvers. This interface does not do any verification of solver parameters. It does not even verify that the specified solver exists. Use this only when you have particular knowledge of what solver is being invoked, and exactly what tuning is required.
Supported solver names/options:
“SNOPT” – Parameter names and values as specified in SNOPT User’s Guide section 7.7 “Description of the optional parameters”, used as described in section 7.5 for snSet(). The SNOPT user guide can be obtained from https://web.stanford.edu/group/SOL/guides/sndoc7.pdf
“IPOPT” – Parameter names and values as specified in IPOPT users guide section “Options Reference” https://coin-or.github.io/Ipopt/OPTIONS.html
“NLOPT” – Parameter names and values are specified in https://nlopt.readthedocs.io/en/latest/NLopt_C-plus-plus_Reference/ (in the Stopping criteria section). Besides these parameters, the user can specify “algorithm” using a string of the algorithm name. The complete set of algorithms is listed in the “nlopt_algorithm_to_string()” function in github.com/stevengj/nlopt/blob/master/src/api/general.c. If you would like to use a certain algorithm, for example NLOPT_LD_SLSQP, call
SetOption(NloptSolver::id(), NloptSolver::AlgorithmName(), "LD_SLSQP");
“GUROBI” – Parameter name and values as specified in Gurobi Reference Manual, section 10.2 “Parameter Descriptions” https://www.gurobi.com/documentation/10.0/refman/parameters.html
“SCS” – Parameter name and values as specified in the struct SCS_SETTINGS in SCS header file https://github.com/cvxgrp/scs/blob/master/include/scs.h Note that the SCS code on github master might be more up-to-date than the version used in Drake.
“MOSEK™” – Parameter name and values as specified in Mosek Reference https://docs.mosek.com/9.3/capi/parameters.html
“OSQP” – Parameter name and values as specified in OSQP Reference https://osqp.org/docs/interfaces/solver_settings.html#solver-settings
“Clarabel” – Parameter name and values as specified in Clarabel https://oxfordcontrol.github.io/ClarabelDocs/stable/api_settings/ Note that direct_solve_method is not supported in Drake yet. Clarabel’s boolean options should be passed as integers (0 or 1).
“CSDP” – Parameter name and values as specified at https://manpages.ubuntu.com/manpages/focal/en/man1/csdp-randgraph.1.html
- __init__(self: pydrake.solvers.SolverOptions) None
- common_solver_options(self: pydrake.solvers.SolverOptions) dict[pydrake.solvers.CommonSolverOption, Union[float, int, str]]
Gets the common options for all solvers. Refer to CommonSolverOption for more details.
- get_print_file_name(self: pydrake.solvers.SolverOptions) str
Returns the kPrintFileName set via CommonSolverOption, or else an empty string if the option has not been set.
- get_print_to_console(self: pydrake.solvers.SolverOptions) bool
Returns the kPrintToConsole set via CommonSolverOption, or else false if the option has not been set.
- get_standalone_reproduction_file_name(self: pydrake.solvers.SolverOptions) str
Returns the kStandaloneReproductionFileName set via CommonSolverOption, or else an empty string if the option has not been set.
- GetOptions(self: pydrake.solvers.SolverOptions, solver_id: pydrake.solvers.SolverId) dict
- SetOption(*args, **kwargs)
Overloaded function.
SetOption(self: pydrake.solvers.SolverOptions, solver_id: pydrake.solvers.SolverId, solver_option: str, option_value: float) -> None
Sets a double-valued solver option for a specific solver.
SetOption(self: pydrake.solvers.SolverOptions, solver_id: pydrake.solvers.SolverId, solver_option: str, option_value: int) -> None
Sets an integer-valued solver option for a specific solver.
SetOption(self: pydrake.solvers.SolverOptions, solver_id: pydrake.solvers.SolverId, solver_option: str, option_value: str) -> None
Sets a string-valued solver option for a specific solver.
SetOption(self: pydrake.solvers.SolverOptions, key: pydrake.solvers.CommonSolverOption, value: Union[float, int, str]) -> None
Sets a common option for all solvers supporting that option (for example, printing the progress in each iteration). If the solver doesn’t support the option, the option is ignored.
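A sketch of the three typed SetOption overloads plus a common option. The IPOPT names (“max_iter”, “tol”) and the NLOPT “algorithm” key follow the solver documentation linked above; treat them as examples to verify against that documentation rather than an exhaustive reference.

```python
from pydrake.solvers import (CommonSolverOption, IpoptSolver, NloptSolver,
                             SolverOptions)

options = SolverOptions()
# Solver-specific options use the names from each solver's own documentation.
options.SetOption(IpoptSolver.id(), "max_iter", 200)          # int-valued
options.SetOption(IpoptSolver.id(), "tol", 1e-8)              # double-valued
options.SetOption(NloptSolver.id(), "algorithm", "LD_SLSQP")  # string-valued
# Common options apply to every solver that supports them.
options.SetOption(CommonSolverOption.kPrintToConsole, 1)
```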
- class pydrake.solvers.SolverType
This type only exists for backwards compatibility, and should not be used in new code.
Members:
kClp :
kCsdp :
kEqualityConstrainedQP :
kGurobi :
kIpopt :
kLinearSystem :
kMobyLCP :
kMosek :
kNlopt :
kOsqp :
kScs :
kSnopt :
kUnrevisedLemke :
- __init__(self: pydrake.solvers.SolverType, value: int) None
- kClp = <SolverType.kClp: 0>
- kCsdp = <SolverType.kCsdp: 1>
- kEqualityConstrainedQP = <SolverType.kEqualityConstrainedQP: 2>
- kGurobi = <SolverType.kGurobi: 3>
- kIpopt = <SolverType.kIpopt: 4>
- kLinearSystem = <SolverType.kLinearSystem: 5>
- kMobyLCP = <SolverType.kMobyLCP: 6>
- kMosek = <SolverType.kMosek: 7>
- kNlopt = <SolverType.kNlopt: 8>
- kOsqp = <SolverType.kOsqp: 9>
- kScs = <SolverType.kScs: 11>
- kSnopt = <SolverType.kSnopt: 10>
- kUnrevisedLemke = <SolverType.kUnrevisedLemke: 12>
- property name
- property value
- class pydrake.solvers.UnrevisedLemkeSolver
Bases:
pydrake.solvers.SolverInterface
A class for the unrevised implementation of Lemke’s algorithm for solving Linear Complementarity Problems (LCPs). See MobyLcpSolver for a description of LCPs. This code makes extensive use of the following document: [Dai 2018] Dai, H. and Drumwright, E. Computing the Principal Pivoting Transform for Solving Linear Complementarity Problems with Lemke’s Algorithm. (2018, located in doc/pivot_column.pdf).
- __init__(self: pydrake.solvers.UnrevisedLemkeSolver) None
- static id() pydrake.solvers.SolverId
- class pydrake.solvers.VisualizationCallback
Bases:
pydrake.solvers.EvaluatorBase
Defines a simple evaluator with no outputs that takes a callback function pointer. This is intended for debugging / visualization of intermediate results during an optimization (for solvers that support it).
- __init__(*args, **kwargs)
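A sketch of the usual way this evaluator is created from Python, via MathematicalProgram.AddVisualizationCallback; whether the callback actually fires depends on the solver chosen (typically the nonlinear solvers such as SNOPT or IPOPT invoke it at each major iteration).

```python
from pydrake.solvers import MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddQuadraticCost((x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)

def show_progress(x_val):
    # Receives the current values of the listed decision variables.
    print("intermediate x =", x_val)

prog.AddVisualizationCallback(show_progress, x)
result = Solve(prog)  # solvers that support callbacks invoke show_progress
```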