5. Constrained optimization¶
TensCalc can be used to generate code to solve optimizations or Nash equilibria. All solvers use a primal-dual interior-point method and share several common parameters.
5.1. Types of code¶
C code
C code is self-contained and library-free (aside from the C standard library). The C code is encapsulated into a dynamic library that can be linked to MATLAB© or used independently.
This type of code is extremely efficient and fast for small to medium-size problems, typically up to a few thousand variables and constraints. In this regime, the C code is typically 10-100 times faster than the MATLAB© code.
In addition to the C code, a MATLAB© class is also generated to set parameter values and call the solver from within MATLAB©.
The scripts that generate C code start with the prefix cmex2.
MATLAB©
MATLAB© code is integrated into a MATLAB© class that is used to set parameter values and call the solver.
The scripts that generate MATLAB© code start with the prefix class2.
Note
The MATLAB© classes that call the C and the MATLAB© solvers appear indistinguishable to the user, aside from the speed of execution.
5.2. Minimization¶
TensCalc can generate optimized code to solve constrained optimizations of the form

minimize f(x)
subject to F(x) = 0, G(x) >= 0

where the variable x
can include multiple tensors and the equality
and inequality constraints can be expressed by equalities and
inequalities involving multiple tensors.
The following two scripts are used to generate code to solve this type of constrained minimization:
- cmex2optimizeCS(parameter1, value1, parameter2, value2, ...)¶
- class2optimizeCS(parameter1, value1, parameter2, value2, ...)¶
- Parameters
parameter1 (string) – name of the parameter to set
value1 (type depends on the parameter) – value for the parameter
parameter2 (string) – name of the parameter to set
value2 (type depends on the parameter) – value for the parameter, …
- Returns
name of the MATLAB© class created
- Return type
string
Both scripts take several parameters (many of them optional) in the form of pairs, consisting of a parameter name and its value, as in:
classname=cmex2optimizeCS('classname','myoptimization', ...
'objective',norm2(x-y), ...
'optimizationVariables', { x, y }, ...
'constraints', { x>=-1, x<=1, y>=10 }, ...
'outputExpressions', { x, y });
Alternatively, parameters to the scripts can be passed using a structure, as in:
opt.classname='myoptimization';
opt.objective=norm2(x-y);
opt.optimizationVariables={ x, y };
opt.constraints={ x>=-1, x<=1, y>=10 };
opt.outputExpressions={ x, y };
classname=cmex2optimizeCS(opt);
Note
This type of parameter passing and validation is enabled by the FunParTools toolbox, which also enables several other advanced features. See Pedigrees.
The function cmex2optimizeCS()
generates C code, whereas
class2optimizeCS()
generates MATLAB© code, but both functions take
the same set of parameters and generate MATLAB© classes that are
indistinguishable to the user.
The following table lists the most commonly used parameters for
cmex2optimizeCS()
and class2optimizeCS()
. For the full set of
parameters, use:
cmex2optimizeCS help
class2optimizeCS help
Parameter |
Allowed values |
Description |
---|---|---|
‘optimizationVariables’ |
cell-array of |
Variables to be optimized. |
‘objective’ |
scalar |
Criterion to optimize. |
‘constraints’ |
cell-array of |
Optimization constraints. Warning For constraint satisfaction, there is no difference between
|
‘outputExpressions’ |
cell-array of |
Expressions (typically involving the optimization variables) that the solver should return upon termination. Warning MATLAB© does not support sparse arrays with more than 2 dimensions; to return such an expression, convert it to a full array with full(), as in: cmex2optimizeCS('classname','myoptimization', ...
'objective',norm2(x-x0), ...
'optimizationVariables', { x }, ...
'parameters', { x0 }, ...
'constraints', { x>=-1, x<=1 }, ...
'outputExpressions', { full(x) });
|
‘parameters’ |
cell-array of |
Parameters that the optimization criterion and constraints may depend upon; to be provided to the solver, but not optimized. |
‘sensitivityVariables’ |
cell-array of |
Optimization variables with respect to which we want to compute cost sensitivity. |
‘classname’ |
string |
Name of the class to be created. A MATLAB© class will be
created with this name. See Solver Class. Warning This parameter should not be used when pedigrees
are enabled using 'pedigreeClass'. |
‘folder’ |
string |
Path to the folder where the class and cmex files will be created. The folder will be created if it does not exist and will be added to the beginning of the path if not there already. |
‘simulinkLibrary’ |
string |
Name of a Simulink library to be created with Simulink blocks that can be used to call the different solver functions. The blocks are created with direct feedthrough. No library and no Simulink blocks are created if simulinkLibrary is an empty string. |
‘gradTolerance’ |
positive number, default 1e-4 |
Maximum norm for the gradient below which the first-order optimality conditions are assumed to be met. |
‘equalTolerance’ |
positive number, default 1e-4 |
Maximum norm for the vector of equality constraints below which the equalities are assumed to hold. |
‘desiredDualityGap’ |
positive number, default 1e-5 |
Value for the duality gap that triggers the end of the constrained optimization. The overall optimization terminates at the end of the first Newton step for which the duality gap becomes smaller than this value. |
‘solverVerboseLevel’ |
integer |
Verbosity level for the solver outputs:
|
‘addEye2Hessian’ |
nonnegative real, default 1e-9 |
Add to the Hessian matrix appropriate identity matrices scaled by this constant. A larger value for addEye2Hessian has two main effects:
Both effects improve the robustness of the solver, but this is typically achieved at the expense of slower convergence. For convex problems, one typically chooses addEye2Hessian equal to the square root of the machine precision. For non-convex problems, one can try increasing this parameter when the Newton direction actually causes an increase of the Lagrangian. |
‘muFactorAggressive’ |
real in the interval (0,1), default .3333 |
Multiplicative factor used to update the barrier parameter mu.
This value is used when there is good progress along the Newton
direction. Nicely conditioned convex problems can use values as low as 1/100, but
poorly conditioned problems may require values as high as 1/3. This
parameter is only used when |
‘muFactorConservative’ |
real in the interval (0,1), default .75 |
Multiplicative factor used to update the barrier parameter mu (must be smaller than 1). This value is used when there is poor or no progress along the Newton direction. A value not much smaller than one is preferable. |
‘skipAffine’ |
[ |
When false the barrier parameter |
‘delta’ |
[2,3], default 3 |
Delta parameter used to determine mu based on the affine search
direction step: Set |
‘alphaMin’ |
real in the interval (0,1], default 1e-7 |
Minimum value for the scalar gain in Newton’s method line search, below which a search direction is declared to have failed. |
‘alphaMax’ |
real in the interval (0,1], default 1 |
Maximum value for the scalar gain in Newton’s method line search. Should only be set lower than 1 for very poorly scaled problems. |
‘useLDL’ |
[ |
When In general, the LDL factorization leads to faster code.
However, the current implementation is restricted to a purely
diagonal D factor (no 2x2 blocks), so it may fail
with the message ‘ldl needs pivoting’. If this happens, either
set |
‘smallerNewtonMatrix’ |
[ |
When |
‘compilerOptimization’ |
[ |
Optimization parameters passed to the C compiler.
This parameter is only used for C-code solvers. |
‘minInstructions4loop’ |
integer, default 50 |
Minimum number of similar instructions that will be executed as
part of a for loop. This parameter is only used for C-code solvers. |
‘maxInstructionsPerFunction’ |
integer, default 100 |
Maximum number of instructions to be included into a single
function. When equal to Large values of This parameter is only used for C-code solvers. |
‘absolutePath’ |
[ |
When When This parameter is only used for C-code solvers. |
‘pedigreeClass’ |
string |
When nonempty, the function outputs are saved to a set of files. All files in the set are characterized by a ‘pedigree’, which describes all the input parameters that were used in the script. This variable contains the name of the file class and may include a path. This is supported by the FunParTools toolbox. See Pedigrees. |
‘executeScript’ |
[ |
Determines whether or not the body of the function should be executed:
See Pedigrees. |
Warning
cmex2optimizeCS() and class2optimizeCS() only solve for first-order
optimality conditions, so they may produce either a minimum or a
maximum (or a saddle point). It is up to the user to restrict the
search domain to make sure that the desired optimum was found, or to
test whether that was the case after the fact. The Hess_ output can
help in that regard.
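For instance, building on the earlier example, one could request the Hess_ output and inspect its eigenvalues at the solution. This is only a hedged sketch: the size given to Tvariable for Hess_ is hypothetical and must match the number of primal variables in the actual problem.

```matlab
% Sketch (assumed sizes): declare Hess_ before generating the solver so
% that it can be returned in 'outputExpressions'.
Tvariable Hess_ [2,2];     % size is hypothetical; must match the primal variables
classname=cmex2optimizeCS('classname','myoptimization', ...
                          'objective',norm2(x-y), ...
                          'optimizationVariables', { x, y }, ...
                          'constraints', { x>=-1, x<=1, y>=10 }, ...
                          'outputExpressions', { x, y, Hess_ });
obj=feval(classname);
% ... set variable values and call solve() ...
[xStar,yStar,H]=getOutputs(obj);
if min(eig(full(H)))>0
    disp('Hessian is positive definite: a local minimum was found');
end
```

A positive-definite Hessian at the solution rules out a maximum or saddle point; an indefinite one suggests re-running the solver from a different initialization.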
5.2.1. Special variables to include in 'outputExpressions'¶
The following Tcalculus variables are assigned special values and
can be used in outputExpressions:
lambda1_
,lambda2_
, …Lagrangian multipliers associated with the inequality constraints (in the order in which they appear and with the same size as the corresponding constraints)
nu1_
,nu2_
, …Lagrangian multipliers associated with the equality constraints (in the order in which they appear and with the same size as the corresponding constraints)
Hess_
Hessian matrix used by the (last) Newton step to update the primal variables (not including addEye2Hessian).
dHess_
D factor in the LDL factorization of the Hessian matrix used by the (last) Newton step to update the primal variables (including addEye2Hessian, unlike
Hess_
).Grad_
gradient of Lagrangian at the (last) Newton step.
mu_
barrier parameter at the (last) Newton step.
u_
vector stacked with all primal variables at the (last) Newton step.
F_
vector stacked with all equalities at the (last) Newton step.
G_
vector stacked with all inequalities at the (last) Newton step.
nu_
vector stacked with all dual equality variables at the (last) Newton step.
lambda_
vector stacked with all dual inequality variables at the (last) Newton step.
Warning
To be able to include these variables in outputExpressions, they
must first be created using Tvariable() with the
appropriate sizes. Their values will then be overwritten by
the solver to reflect the quantities listed above.
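For example (a hedged sketch; the sizes below are illustrative assumptions and must match the actual problem), to return the barrier parameter and the multiplier of the first inequality constraint:

```matlab
% The sizes below are assumptions for illustration only.
Tvariable mu_ [];          % barrier parameter (scalar)
Tvariable lambda1_ [1];    % same size as the 1st inequality constraint
classname=class2optimizeCS('classname','myoptimization', ...
                           'objective',norm2(x-y), ...
                           'optimizationVariables', { x, y }, ...
                           'constraints', { x>=-1, x<=1, y>=10 }, ...
                           'outputExpressions', { x, y, mu_, lambda1_ });
```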
5.2.2. Sensitivity variables¶
Warning
This section of the documentation is still incomplete.
- Tvars2optimizeCS(parameter1, value1, parameter2, values2, ...)¶
Parameter |
Allowed values |
Description |
---|---|---|
‘optimizationVariables’ |
cell-array of |
Variables to be optimized. |
‘sensitivityVariables’ |
cell-array of |
Optimization variables with respect to which we want to compute cost sensitivity. |
‘objective’ |
scalar |
Criterion to optimize. |
‘constraints’ |
cell-array of |
Optimization constraints. |
‘addEye2Hessian’ |
nonnegative real, default 1e-9 |
Add to the Hessian matrix appropriate identity matrices scaled by this constant. A larger value for addEye2Hessian has two main effects: |
‘smallerNewtonMatrix’ |
[ |
When |
5.2.3. Solver Class¶
The MATLAB© class created to set parameter values and call the solver from within MATLAB© has the following methods:
- obj=classname()
creates the class and, for C code, loads the dynamic library containing the C code
- delete(obj)¶
deletes the class and, for C code, unloads the dynamic library
- setP_{parameter}(obj,value)
sets the value of one of the parameters
- setV_{variable}(obj,value)
sets the value of one of the optimization variables
- [y1,y2, ...]=getOutputs(obj)
gets the values of the
outputExpressions
- [status,iter,time]=solve(obj,mu0,int32(maxIter),int32(saveIter))
- Parameters
mu0 – initial value for the barrier variable
maxIter – maximum number of Newton iterations
saveIter –
iteration number at which to save the “hessian” matrix (for subsequent pivoting/permutations/scaling optimization); only saved when allowSave is true.
When saveIter=0, the hessian matrix is saved at the last iteration; and
when saveIter<0, the hessian matrix is not saved.
The “hessian” matrix will be saved, regardless of the value of saveIter, when the solver exits with status=4.
- Returns status
solver exit status
0 = success
>0 = solver terminated unexpectedly
A nonzero status indicates the reason for termination in a binary format:
bit 0 = 1 - (primal) variables violate constraints
bit 1 = 1 - dual variables are negative
bit 2 = 1 - failed to invert hessian
bit 3 = 1 - maximum # of iterations reached
When the solver exits because the maximum # of iterations was reached (bit 3 = 1), the remaining bits provide information about the solution returned:
bit 4 = 1 - gradient larger than gradTolerance
bit 5 = 1 - equality constraints violate equalTolerance
bit 6 = 1 - duality gap larger than desiredDualityGap
bit 7 = 1 - barrier variable larger than minimum value
bit 8 = 1 - scalar gain alpha in Newton direction smaller than alphaMin
bit 9 = 1 - scalar gain alpha in Newton direction smaller than .1
bit 10 = 1 - scalar gain alpha in Newton direction smaller than .5
- Returns iter
number of iterations
- Returns time
solver’s compute time (in secs).
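A typical call sequence, sketched for the earlier example (the initial values, iteration limit, and setter names below are illustrative assumptions; the actual setter names depend on your variable names):

```matlab
% Hedged usage sketch for a generated solver class.
obj=feval(classname);            % create the object; for C code this loads the library
setV_x(obj,0);                   % initial values (sizes must match the Tvariables)
setV_y(obj,11);                  % should strictly satisfy the inequality constraints
mu0=1;                           % initial value for the barrier variable
[status,iter,time]=solve(obj,mu0,int32(200),int32(-1));   % no hessian saving
if status==0
    [xStar,yStar]=getOutputs(obj);
    fprintf('solved in %d iterations (%.3f ms)\n',iter,1e3*time);
else
    fprintf('solver failed; status bits: %s\n',dec2bin(status));
end
clear obj                        % for C code, unloads the dynamic library
```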
5.2.4. Pedigrees¶
Pedigrees can save a lot of time for functions that take a while to execute, like TensCalc’s code-generation functions:
cmex2optimizeCS, cmex2equilibriumLatentCS, cmex2compute,
class2optimizeCS, class2equilibriumLatentCS, class2compute
Essentially, every time a function is executed with pedigrees enabled, all its inputs are saved in a pedigree file and the function’s outputs are also saved. In case the function produces output files (e.g., C or MATLAB© code), the files are stored with unique names for possible subsequent reuse. When the function is called again, it checks whether it has been previously called with the exact same inputs:
If it has, then the previously saved outputs can be retrieved from the appropriate files and the function does not need to be recomputed.
Otherwise, the function is executed and an additional pedigree and the associated outputs are saved for potential subsequent use.
Pedigrees are enabled by specifying the input parameter
'pedigreeClass'
, which is the common prefix used for the names of
all the files used to save the pedigree and outputs of the
function. Each time the function is called, this prefix is augmented
with a unique suffix that reflects the date and time the function was
called. To be precise, the first date/time the function was called
with that particular set of inputs. Any subsequent calls with the same
exact inputs reuse the previously saved pedigree.
The following call to cmex2optimizeCS()
enables the use of pedigrees:
classname=cmex2optimizeCS('pedigreeClass','tmp_myopt', ...
'executeScript','asneeded',...
'objective',norm2(x-y), ...
'optimizationVariables', { x, y }, ...
'constraints', { x>=-1, x<=1, y>=10 }, ...
'outputExpressions', { x, y });
and an instance of the solver generated by this call to
cmex2optimizeCS()
can be created using:
obj=feval(classname);
- In the call to
cmex2optimizeCS()
, the parameter pair: 'pedigreeClass','tmp_myopt'
specifies that pedigrees should be enabled and that all pedigree files
should start with the prefix tmp_myopt
. The cmex2optimizeCS()
parameter pair:
'executeScript','asneeded'
further specifies that the cmex2optimizeCS()
function should only be
executed if “it is needed”. Specifically, if cmex2optimizeCS()
has
previously been called with the same exact set of inputs, then
cmex2optimizeCS()
should not be executed and, instead, the
previously generated code should be reused. Alternatively,
'executeScript','yes'
would specify that the code should be regenerated even if
cmex2optimizeCS()
has been called before with the same exact inputs.
Warning
When pedigrees are enabled through 'pedigreeClass'
one should
not specify the class name using the parameter 'classname'
.
The actual classname will be chosen so that it is unique for a specific set of inputs and returned to the user as the output. In this way, regenerating code with a new set of inputs will not overwrite existing code.
Note
Every time a function that uses pedigrees is called with a different set of inputs, a new pedigree is created for potential subsequent reuse. Because of this, one can easily get 100s of pedigree files. The good news is that pedigree files can be safely removed, because they are simply used to save time.
It is a good practice to name all your pedigree files with a unique
prefix that marks the file as “safe-to-remove”. In all TensCalc
examples, we use the prefix 'tmp'
to mark these files, which
means that all files starting with the 3 letters 'tmp'
are safe
to remove.
Note
Pedigrees are stored both as a .mat
file (for fast retrieval)
as well as a human-readable .html
file. In general, there is
little reason to dig into pedigree files, but all the inputs are
there in case one is curious about previous calls to
code-generation functions.
Pedigrees are enabled by the FunParTools toolbox and are available to any function that uses this toolbox to process input and output parameters.
5.3. Nash equilibrium¶
TensCalc can generate optimized code to compute Nash equilibrium
where the variables u
, d
, x
can include multiple tensors
and the equality and inequality constraints can be expressed by
equalities and inequalities involving multiple tensors.
The latent variable x
that appears in both minimizations, must
be fully determined by the equality constraints and is thus not really
a free optimization variable.
The following two scripts are used to generate code to compute this type of Nash equilibrium:
- cmex2equilibriumLatentCS(parameter1, value1, parameter2, value2, ...)¶
- class2equilibriumLatentCS(parameter1, value1, parameter2, value2, ...)¶
- Parameters
parameter1 (string) – name of the parameter to set
value1 (type depends on the parameter) – value for the parameter
parameter2 (string) – name of the parameter to set
value2 (type depends on the parameter) – value for the parameter, …
- Returns
name of the MATLAB© class created
- Return type
string
The function cmex2equilibriumLatentCS() generates C code, whereas
class2equilibriumLatentCS() generates MATLAB© code, but both functions take
the same set of parameters and generate MATLAB© classes that are
indistinguishable to the user.
Parameter |
Allowed values |
Description |
---|---|---|
‘P1optimizationVariables’ ‘P2optimizationVariables’ |
cell-array of |
Variables to be optimized by player 1 and player 2 |
‘P1objective’ ‘P2objective’ |
scalar |
Criteria to optimize for player 1 and player 2 |
‘P1constraints’ ‘P2constraints’ |
cell-array of |
Optimization constraints for player 1 and player 2 Warning For constraint satisfaction, there is no difference between
|
‘latentVariables’ |
cell-array of |
Latent optimization variables common to both players |
‘latentConstraints’ |
cell-array of |
Optimization constraints common to both players that implicitly define the latent variables. |
‘outputExpressions’ |
cell-array of |
Expressions (typically involving the optimization variables) that the solver should return upon termination. See Special variables to include in 'outputExpressions' for Nash solver: |
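The call below sketches how these parameters fit together for a simple two-player game; the objectives, constraints, and the latent constraint x==u+d are illustrative assumptions, not taken from the text above:

```matlab
% Hedged sketch of generating a Nash-equilibrium solver with a latent
% variable x implicitly defined by the latent (equality) constraint.
classname=cmex2equilibriumLatentCS('classname','myequilibrium', ...
    'P1objective',norm2(x-u), ...
    'P1optimizationVariables', { u }, ...
    'P1constraints', { u>=-1, u<=1 }, ...
    'P2objective',-norm2(x-u), ...        % zero-sum: player 2 maximizes player 1's cost
    'P2optimizationVariables', { d }, ...
    'P2constraints', { d>=-1, d<=1 }, ...
    'latentVariables', { x }, ...
    'latentConstraints', { x==u+d }, ...
    'outputExpressions', { u, d, x });
obj=feval(classname);
```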
5.3.1. Special variables to include in 'outputExpressions' for Nash solver¶
The following Tcalculus variables are assigned special values and
can be used in outputExpressions:
P1lambda1_
,P1lambda2_
, … ,P2lambda1_
,P2lambda2_
, …Lagrangian multipliers associated with the inequality constraints for players 1 and 2 (in the order in which they appear and with the same size as the corresponding constraints)
P1nu1_
,P1nu2_
, … ,P2nu1_
,P2nu2_
, …P1xnu1_
,P1xnu2_
, … ,P2xnu1_
,P2xnu2_
, …Lagrangian multipliers associated with the equality constraints for players 1 and 2 (in the order in which they appear and with the same size as the corresponding constraints). The
P1x
andP2x
variables correspond to thelatentConstraints
.Hess_
Hessian matrix used by the (last) Newton step to update the primal variables (not including addEye2Hessian).
Warning
To be able to include these variables in outputExpressions, they
must first be created using Tvariable() with the
appropriate sizes. Their values will then be overwritten by
the solver to reflect the quantities listed above.