atomica.optimization¶
Implements various optimizations in Atomica

This module implements the Optimization class, which contains the information required to perform an optimization in Atomica. An Optimization effectively serves as a mapping from one set of program instructions to another.
Functions

optimize – Main user entry point for optimization
Classes

Adjustable
AtLeastMeasurable – Enforce quantity exceeds a value
AtMostMeasurable – Enforce quantity is below a value
Constraint – Store conditions to satisfy during optimization
DecreaseByMeasurable – Decrease quantity by percentage
ExponentialSpendingAdjustment – Parametric overwrite example
IncreaseByMeasurable – Increase quantity by percentage
MaximizeCascadeConversionRate – Maximize overall conversion rate
MaximizeCascadeStage
Measurable – Optimization objective
Optimization – Instructions on how to perform an optimization
PairedLinearSpendingAdjustment – Parametric overwrite with multiple programs
SpendingAdjustment – Adjust program spending
StartTimeAdjustment – Optimize program start year
TotalSpendConstraint – Fix total spending

Exceptions

FailedConstraint – Not possible to apply constraint
InvalidInitialConditions – Invalid initial parameter values
UnresolvableConstraint – Unresolvable (ill-posed) constraint
class atomica.optimization.Adjustable(name, limit_type='abs', lower_bound=-inf, upper_bound=inf, initial_value=None)[source]¶

class atomica.optimization.AtLeastMeasurable(measurable_name, t, threshold, pop_names=None)[source]¶ Enforce quantity exceeds a value

This Measurable imposes a penalty if the quantity is smaller than some threshold. The initial points should be ‘valid’ in the sense that the quantity starts out above the threshold (and during optimization it will never be allowed to cross the threshold).

Typically, this Measurable would be used in money minimization in conjunction with measurables that aim to minimize spending.

The measurable returns np.inf if the condition is violated, and 0.0 otherwise.
get_objective_val(model, baseline)[source]¶ Return objective value

This method should return the _unweighted_ objective value. Note that further transformation may occur.

Parameters
model – A Model object after integration
baseline – The baseline variable returned by this Measurable at the start of optimization

Returns
A scalar objective value


class atomica.optimization.AtMostMeasurable(measurable_name, t, threshold, pop_names=None)[source]¶ Enforce quantity is below a value

This Measurable imposes a penalty if the quantity is larger than some threshold. The initial points should be ‘valid’ in the sense that the quantity starts out below the threshold (and during optimization it will never be allowed to cross the threshold).

Typically, this Measurable would be used in conjunction with other measurables – for example, optimizing one quantity while ensuring another quantity does not cross a threshold.

The measurable returns np.inf if the condition is violated, and 0.0 otherwise.
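The pass/fail behaviour shared by AtLeastMeasurable and AtMostMeasurable can be sketched in plain Python. This is a simplified model of the logic described above, not the actual Atomica implementation:

```python
import numpy as np

def at_least_penalty(value, threshold):
    """Return np.inf if the quantity falls below the threshold, else 0.0."""
    return 0.0 if value >= threshold else np.inf

def at_most_penalty(value, threshold):
    """Return np.inf if the quantity exceeds the threshold, else 0.0."""
    return 0.0 if value <= threshold else np.inf

# A candidate allocation that keeps coverage above 90% incurs no penalty,
# while one that lets coverage drop below the threshold is rejected outright
print(at_least_penalty(0.95, 0.90))  # 0.0
print(at_least_penalty(0.85, 0.90))  # inf
```

Because the penalty jumps to infinity rather than growing smoothly, candidate points that violate the threshold are simply rejected, which is why the initial point must already satisfy the condition.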
get_objective_val(model, baseline)[source]¶ Return objective value

This method should return the _unweighted_ objective value. Note that further transformation may occur.

Parameters
model – A Model object after integration
baseline – The baseline variable returned by this Measurable at the start of optimization

Returns
A scalar objective value


class atomica.optimization.Constraint[source]¶ Store conditions to satisfy during optimization
A Constraint represents a condition that must be satisfied by the Instructions after the cumulative effect of all adjustments. The Instructions are rescaled to satisfy the constraint directly (rather than changing the value of the Adjustables) although this distinction really only matters in the context of parametric spending.

constrain_instructions(instructions, hard_constraints)[source]¶ Apply constraint to instructions

Constrains the instructions and returns a metric penalizing the constraint. If there is no penalty associated with adjusting (perhaps if all of the Adjustments are parametric?) then this would be 0.0. The penalty represents, in some sense, the quality of the constraint. For example, the default TotalSpendConstraint rescales spending such that the total spend matches a target value. The penalty reflects the distance between the requested spend and the constrained spend, so it is desirable to minimize it. If it is not possible to constrain the instructions, raise FailedConstraint.

Parameters
instructions (ProgramInstructions) – The ProgramInstructions instance to constrain (in place)
hard_constraints – The hard constraint returned by get_hard_constraint

Return type
float

Returns
A numeric penalty value. Return np.inf if the constraint could not be satisfied

get_hard_constraint(optimization, instructions)[source]¶ Return hard constraint from initial instructions

Often constraints can be specified relative to the initial conditions. For example, fixing total spend regardless of what the total spend is in the initial instructions. Therefore, during constrain_instructions, it is necessary to examine properties from the initial instructions in order to perform the constraining.

This method is called at the very start of optimization, passing in the initial instructions. It then returns an arbitrary value that is passed back to the instance’s constrain_instructions during optimization. For example, consider the total spending constraint: get_hard_constraint would extract the total spend from the initial instructions, and this value is passed to constrain_instructions where it is used to rescale spending.

Because subclasses implement both get_hard_constraint and constrain_instructions, no assumptions need to be made about the value returned by this method – it simply needs to be paired to constrain_instructions.

Parameters
optimization – An Optimization
instructions (ProgramInstructions) – A set of initial instructions to extract absolute constraints from

Returns
Arbitrary variable that will be passed back during constrain_instructions
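The two-method contract described above can be illustrated with a toy constraint over a plain spending dict. The class and dicts below are hypothetical stand-ins for ProgramInstructions and a Constraint subclass, not the real Atomica types:

```python
class ToyFixedProgramConstraint:
    """Toy analogue of the get_hard_constraint/constrain_instructions pairing:
    hold spending on one program at whatever it was in the initial instructions."""

    def __init__(self, prog):
        self.prog = prog

    def get_hard_constraint(self, initial_instructions):
        # Called once at the start of optimization with the *initial* instructions
        return initial_instructions[self.prog]

    def constrain_instructions(self, instructions, hard_constraint):
        # Called every iteration: modify the instructions in place, return a penalty
        penalty = abs(instructions[self.prog] - hard_constraint)
        instructions[self.prog] = hard_constraint
        return penalty

c = ToyFixedProgramConstraint('outreach')
hard = c.get_hard_constraint({'outreach': 40.0, 'clinic': 60.0})  # 40.0
proposal = {'outreach': 55.0, 'clinic': 60.0}
penalty = c.constrain_instructions(proposal, hard)
# proposal['outreach'] is forced back to 40.0; penalty == 15.0
```

The value returned by get_hard_constraint is opaque to the rest of the optimization; only the matching constrain_instructions needs to understand it.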


class atomica.optimization.DecreaseByMeasurable(measurable_name, t, decrease, pop_names=None, target_type='frac')[source]¶ Decrease quantity by percentage

This Measurable stores the value of a quantity using the original instructions. It then requires that there is a minimum decrease in the value of the quantity during optimization. For example

>>> DecreaseByMeasurable('deaths',2030,0.05)

This Measurable would correspond to a decrease of 5% in the number of deaths in 2030.

The measurable returns np.inf if the condition is violated, and 0.0 otherwise.

Parameters
measurable_name – The base measurable class accepts the name of a program (for spending) or a quantity supported by Population.get_variable()
t – Single year, or a list of two start/stop years. If specifying a single year, that year must appear in the simulation output. The quantity will be summed over all simulation time points
decrease – The amount by which to decrease the measurable (e.g. 0.05 for a 5% decrease). Use target_type='abs' to specify an absolute decrease
pop_names – The base Measurable class takes in the names of the populations to use. If multiple populations are provided, the objective will be added across the named populations
target_type – Specify fractional ‘frac’ or absolute ‘abs’ decrease (default is fractional)

get_baseline(model)[source]¶ Return cached baseline values

Similar to get_hard_constraint, sometimes a relative Measurable might be desired e.g. ‘Reduce deaths by at least 50%’. In that case, we need to perform a procedure similar to getting a hard constraint, where the Measurable receives an initial Model object and extracts baseline data for subsequent use in get_objective_val. Thus, the output of this function is paired to its usage in get_objective_val.

Parameters
model –

Return type
float

Returns
The value to pass back to the Measurable during optimization

get_objective_val(model, baseline)[source]¶ Return objective value

This method should return the _unweighted_ objective value. Note that further transformation may occur.

Parameters
model (Model) – A Model object after integration
baseline (float) – The baseline variable returned by this Measurable at the start of optimization

Return type
float

Returns
A scalar objective value
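The baseline-relative check described above can be sketched as follows. This is a simplification: the real class also sums over populations and time points, but the pass/fail logic against the cached baseline is the same idea:

```python
import numpy as np

def decrease_by_objective(value, baseline, decrease, target_type='frac'):
    """Return 0.0 if `value` has fallen by at least `decrease` relative to the
    cached `baseline`, and np.inf otherwise (a hard pass/fail objective)."""
    if target_type == 'frac':
        target = baseline * (1 - decrease)  # e.g. 5% below baseline
    else:  # 'abs'
        target = baseline - decrease        # absolute reduction
    return 0.0 if value <= target else np.inf

baseline_deaths = 1000.0  # deaths in 2030 under the original instructions
print(decrease_by_objective(940.0, baseline_deaths, 0.05))  # 0.0 (>= 5% decrease)
print(decrease_by_objective(980.0, baseline_deaths, 0.05))  # inf (only 2% decrease)
```

The baseline is captured from the original instructions via get_baseline, so the requirement stays fixed even as the optimizer changes spending.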

class atomica.optimization.ExponentialSpendingAdjustment(prog_name, t, t_0, t_end, p1, a1, a2)[source]¶ Parametric overwrite example

This is an example of an Adjustment that uses a function of several variables to compute time-dependent spending.

exception atomica.optimization.FailedConstraint[source]¶ Not possible to apply constraint

This error gets raised if a Constraint is unable to transform the instructions given the supplied parameter values (but other values may be acceptable). It signals that the algorithm should proceed immediately to the next iteration.

class atomica.optimization.IncreaseByMeasurable(measurable_name, t, increase, pop_names=None, target_type='frac')[source]¶ Increase quantity by percentage

This Measurable stores the value of a quantity using the original instructions. It then requires that there is a minimum increase in the value of the quantity during optimization. For example

>>> IncreaseByMeasurable('alive',2030,0.05)

This Measurable would correspond to an increase of 5% in the number of people alive in 2030.

The measurable returns np.inf if the condition is violated, and 0.0 otherwise.

Parameters
measurable_name – The base measurable class accepts the name of a program (for spending) or a quantity supported by Population.get_variable()
t – Single year, or a list of two start/stop years. If specifying a single year, that year must appear in the simulation output. The quantity will be summed over all simulation time points
increase – The amount by which to increase the measurable (e.g. 0.05 for a 5% increase). Use target_type='abs' to specify an absolute increase
pop_names – The base Measurable class takes in the names of the populations to use. If multiple populations are provided, the objective will be added across the named populations
target_type – Specify fractional ‘frac’ or absolute ‘abs’ increase (default is fractional)

get_baseline(model)[source]¶ Return cached baseline values

Similar to get_hard_constraint, sometimes a relative Measurable might be desired e.g. ‘Reduce deaths by at least 50%’. In that case, we need to perform a procedure similar to getting a hard constraint, where the Measurable receives an initial Model object and extracts baseline data for subsequent use in get_objective_val. Thus, the output of this function is paired to its usage in get_objective_val.

Parameters
model –

Return type
float

Returns
The value to pass back to the Measurable during optimization

get_objective_val(model, baseline)[source]¶ Return objective value

This method should return the _unweighted_ objective value. Note that further transformation may occur.

Parameters
model (Model) – A Model object after integration
baseline (float) – The baseline variable returned by this Measurable at the start of optimization

Return type
float

Returns
A scalar objective value

exception atomica.optimization.InvalidInitialConditions[source]¶ Invalid initial parameter values

This error gets thrown if the initial conditions yield an objective value that is not finite

class atomica.optimization.MaximizeCascadeConversionRate(cascade_name, t, pop_names='all', weight=1.0)[source]¶ Maximize overall conversion rate

Maximize conversion summed over all cascade stages

Parameters
cascade_name – The name of one of the cascades in the Framework
t (float) – A single time value e.g. 2020
pop_names – A single pop name (including ‘all’), a list of populations, or a dict/list of dicts, each with a single aggregation e.g. {'foo':['0-4','5-14']}
weight – Weighting factor for this Measurable in the overall objective function

get_objective_val(model, baseline)[source]¶ Return objective value

This method should return the _unweighted_ objective value. Note that further transformation may occur.

Parameters
model – A Model object after integration
baseline – The baseline variable returned by this Measurable at the start of optimization

Returns
A scalar objective value

class atomica.optimization.MaximizeCascadeStage(cascade_name, t, pop_names='all', weight=1.0, cascade_stage=1)[source]¶
get_objective_val(model, baseline)[source]¶ Return objective value

This method should return the _unweighted_ objective value. Note that further transformation may occur.

Parameters
model – A Model object after integration
baseline – The baseline variable returned by this Measurable at the start of optimization

Returns
A scalar objective value


class atomica.optimization.Measurable(measurable_name, t, pop_names=None, weight=1.0)[source]¶ Optimization objective

A Measurable is a class that returns an objective value based on a simulated Model object. It takes in a Model and returns a scalar value. Often, an optimization may contain multiple Measurable objects, and the objective value returned by each of them is summed together.

Parameters
measurable_name – The base measurable class accepts the name of a program (for spending) or a quantity supported by Population.get_variable()
t – Single year, or a list of two start/stop years. If specifying a single year, that year must appear in the simulation output. The quantity will be summed over all simulation time points
pop_names – The base Measurable class takes in the names of the populations to use. If multiple populations are provided, the objective will be added across the named populations
weight – The weight factor multiplies the quantity

get_baseline(model)[source]¶ Return cached baseline values

Similar to get_hard_constraint, sometimes a relative Measurable might be desired e.g. ‘Reduce deaths by at least 50%’. In that case, we need to perform a procedure similar to getting a hard constraint, where the Measurable receives an initial Model object and extracts baseline data for subsequent use in get_objective_val. Thus, the output of this function is paired to its usage in get_objective_val.

Parameters
model –

Returns
The value to pass back to the Measurable during optimization

get_objective_val(model, baseline)[source]¶ Return objective value

This method should return the _unweighted_ objective value. Note that further transformation may occur.

Parameters
model (Model) – A Model object after integration
baseline – The baseline variable returned by this Measurable at the start of optimization

Return type
float

Returns
A scalar objective value

class atomica.optimization.Optimization(name=None, adjustments=None, measurables=None, constraints=None, maxtime=None, maxiters=None, method='asd')[source]¶ Instructions on how to perform an optimization

The Optimization object stores the information that defines an optimization operation. Optimization can be thought of as a function mapping one set of program instructions to another set of program instructions. The parameters of that function are stored in the Optimization object, and amount to

- A definition of optimality
- A specification of allowed changes to the program instructions
- Any additional information required by a particular optimization algorithm e.g. ASD

Parameters
name –
adjustments – An Adjustment or list of Adjustment objects
measurables – A Measurable or list of Measurable objects
constraints – Optionally provide a Constraint or list of Constraint objects
maxtime – Optionally specify maximum ASD time
maxiters – Optionally specify maximum number of ASD iterations or hyperopt evaluations
method – One of ['asd', 'pso', 'hyperopt']: asd (normal ASD), pso (particle swarm optimization from pyswarm), or hyperopt (hyperopt’s Bayesian optimization function)

compute_objective(model, baselines)[source]¶ Return total objective function

This method accumulates the objective values returned by each Measurable, passing in the corresponding baseline values where required.

Parameters
model – A simulated Model object
baselines (list) – List of baseline values, the same length as the number of Measurables

Return type
float

Returns
The total/net objective value
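The accumulation across measurables amounts to a weighted sum, sketched here with hypothetical stand-in measurables (simple (weight, function) pairs rather than real Measurable objects):

```python
# Hypothetical stand-ins: each "measurable" is a (weight, objective_fn) pair,
# where objective_fn plays the role of get_objective_val(model, baseline)
measurables = [
    (1.0, lambda model, baseline: model['deaths']),           # minimize deaths
    (0.1, lambda model, baseline: model['spend'] - baseline), # penalize extra spend
]

def compute_objective(measurables, model, baselines):
    """Weighted sum of unweighted objective values, one baseline per measurable."""
    return sum(w * fn(model, b) for (w, fn), b in zip(measurables, baselines))

model = {'deaths': 500.0, 'spend': 120.0}
total = compute_objective(measurables, model, baselines=[None, 100.0])
# total = 1.0*500 + 0.1*(120 - 100) = 502.0
```

Note how each measurable is paired with its own baseline entry, matching the list returned by get_baselines.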

constrain_instructions(instructions, hard_constraints)[source]¶ Apply all constraints in place, return penalty

This method takes in the proposed instructions and a list of hard constraints. Each constraint is applied to the instructions iteratively, passing in that constraint’s own hard constraint, and the penalty is accumulated and returned.

Parameters
instructions (ProgramInstructions) – The current proposed ProgramInstructions
hard_constraints (list) – A list of hard constraints, the same length as the number of constraints

Return type
float

Returns
The total penalty value (if not finite, model integration will be skipped and the parameters will be rejected)

get_baselines
(pickled_model)[source]¶ Return Measurable baseline values
This method is run at the start of the optimize script, and is used to retrieve the baseline values for the Measurable. Note that the baseline values are obtained based on the original instructions (stored in the pickled model), independent of the initial parameters used for optimization. The logic is that the initial parameters for the optimization are a choice dictated by the numerics of optimization (e.g. needing to start from a particular part of the parameter space) rather than anything intrinsic to the problem, whereas the initial instructions reflect the actual baseline conditions.
 Parameters
pickled_model –
x0 – The initial parameter values
hard_constraints – List of hard constraint values
 Return type
list
 Returns
A list of Measurable baseline values

get_hard_constraints(x0, instructions)[source]¶ Get hard constraints

This method calls get_hard_constraint on each Constraint in the Optimization iteratively, and returns them as a list.

Note that the initial optimization values x0 are applied _before_ the hard constraint is computed. This ensures that the hard constraints are relative to the initial conditions in the optimization, not the initial instructions. For example, if a parametric overwrite is present, the hard constraint will be relative to whatever spending is produced by the initial values of the parametric overwrite.

Parameters
x0 – The initial values for optimization – these are applied to the instructions prior to extracting hard constraints
instructions (ProgramInstructions) – The initial instructions

Return type
list

Returns
A list of hard constraints, as many items as there are constraints

get_initialization(progset, instructions)[source]¶ Get initial values for each adjustment

The initial conditions depend nontrivially on both the progset and the instructions. Spending is present in the progset and optionally overwritten in the instructions. Therefore, it is necessary to check both when determining initial spending. Extraction of the initial values for each Adjustment is delegated to the Adjustment itself.

Note also that the return arrays have length equal to the number of Adjustables (since an Adjustment may contain several Adjustables).

Parameters
progset (ProgramSet) – The program set to extract initial conditions from
instructions (ProgramInstructions) – Instructions to extract initial conditions from

Return type
tuple

Returns
Tuple containing (initial, low, high) with arrays for

- The initial value of each adjustable
- The lower limit for each adjustable
- The upper limit for each adjustable

maxiters = None¶ Maximum number of ASD iterations or hyperopt evaluations

maxtime = None¶ Maximum ASD time

method = None¶ Optimization method name

update_instructions(asd_values, instructions)[source]¶ Apply all Adjustments

This method takes in a list of values (same length as the number of adjustables) and iteratively calls each Adjustment in the optimization to update the instructions (in place).

Parameters
asd_values – A list of values
instructions (ProgramInstructions) – The ProgramInstructions instance to update

Return type
None

class atomica.optimization.PairedLinearSpendingAdjustment(prog_names, t)[source]¶ Parametric overwrite with multiple programs

This example Adjustment demonstrates a parametric time-varying budget reaching more than one program. A single adjustable corresponding to the rate of change simultaneously acts on two programs in opposite directions.
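The idea of one adjustable driving two programs in opposite directions can be sketched with a toy fund-transfer function. This is an illustrative analogue over plain dicts, not the actual parametric budget implementation:

```python
def paired_adjustment(spending, prog_a, prog_b, rate, years):
    """Shift funds from prog_a to prog_b at `rate` per year, conserving the total.

    `rate` is the single adjustable: one parameter moves both programs
    in opposite directions simultaneously."""
    out = {}
    for t in years:
        delta = rate * (t - years[0])
        out[t] = {prog_a: spending[prog_a] - delta,
                  prog_b: spending[prog_b] + delta}
    return out

alloc = paired_adjustment({'a': 100.0, 'b': 50.0}, 'a', 'b',
                          rate=5.0, years=[2020, 2021, 2022])
# By 2022, 10 units have moved from 'a' to 'b': {'a': 90.0, 'b': 60.0},
# and the total spend (150.0) is unchanged in every year
```

Because the transfer conserves the total by construction, such parametric adjustments can satisfy a total spending constraint with zero penalty.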

class atomica.optimization.SpendingAdjustment(prog_name, t, limit_type='abs', lower=0.0, upper=inf, initial=None)[source]¶ Adjust program spending

This adjustment class represents making a spending quantity adjustable. By default, the base class simply overwrites the spending value at a particular point in time. A SpendingAdjustment has a separate Adjustable for each time reached (independently).

Parameters
prog_name – The code name of a program
t – A single time, or list/array of times at which to make adjustments
limit_type – Interpret lower and upper as absolute or relative limits (should be 'abs' or 'rel')
lower – Lower bound (0 by default). A single value (used for all times) or a list/array the same length as t
upper – Upper bound (np.inf by default). A single value (used for all times) or a list/array the same length as t
initial – Optionally specify the initial value, either as a scalar or list/array the same length as t. If not specified, the initial spend will be drawn from the program instructions, or the progset.
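The way one adjustable is created per time point, with scalar bounds broadcast across times, can be sketched as follows. The function and naming scheme are illustrative assumptions, not the real class internals:

```python
import numpy as np

def make_adjustables(prog_name, t, lower=0.0, upper=np.inf,
                     limit_type='abs', initial_spend=None):
    """Create one (name, lower, upper) adjustable per time point.

    With limit_type='rel', bounds are interpreted as multiples of the
    initial spend rather than absolute amounts."""
    t = np.atleast_1d(t)
    lower = np.broadcast_to(lower, t.shape).astype(float)
    upper = np.broadcast_to(upper, t.shape).astype(float)
    if limit_type == 'rel':
        lower = lower * initial_spend
        upper = upper * initial_spend
    return [(f'{prog_name}_{year}', lo, hi)
            for year, lo, hi in zip(t, lower, upper)]

# Two time points -> two independently-bounded adjustables; relative limits
# of 0.5x-2x on an initial spend of 100 become absolute bounds of 50-200
adjs = make_adjustables('treatment', [2020, 2025], lower=0.5, upper=2.0,
                        limit_type='rel', initial_spend=100.0)
```

Passing a list for lower or upper (the same length as t) would give each time point its own bounds instead of broadcasting one scalar.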

class atomica.optimization.StartTimeAdjustment(name, lower, upper, initial)[source]¶ Optimize program start year

This is an example of an Adjustment that does not target a spending value.

class atomica.optimization.TotalSpendConstraint(total_spend=None, t=None, budget_factor=1.0)[source]¶ Fix total spending

This class implements a constraint on the total spend at every time point when a program is optimizable. A program is considered optimizable if an Adjustment reaches that program at the specified time. Spending is constrained independently at all times when any program is adjustable.

The total_spend argument allows the total spending in a particular year to be explicitly specified rather than drawn from the initial allocation. This can be useful when using parametric programs where the adjustables do not directly correspond to spending values. This constraint can also be set to only apply in certain years.

The budget_factor multiplies the total spend at the time the hard_constraint is assigned. Typically this is used to scale up the available spending when that spending is being drawn from the instructions/progset (otherwise the budget_factor could already be part of the specified total spend).

Note that if no times are specified, the budget factor should be a scalar, but no explicit spending values can be specified. This is because in the case where different programs are optimized in different years, an explicit total spending constraint applying to all times is unlikely to be a sensible choice (so we just ask the user to specify the time as well).

Parameters
total_spend – A list of spending amounts the same size as t (can contain Nones), or None. For times in which the total spend is None, it will be automatically set to the sum of spending on optimizable programs in the corresponding year
t – A time, or list of times, at which to apply the total spending constraint. If None, it will automatically be set to all years in which spending adjustments are being made
budget_factor – The budget factor multiplies whatever the total_spend is. This can either be a single value, or a year-specific value

constrain_instructions(instructions, hard_constraints)[source]¶ Apply total spend constraint

Parameters
instructions (ProgramInstructions) – The ProgramInstructions instance to constrain
hard_constraints (dict) – Dictionary of hard constraints

Return type
float

Returns
Distance-like difference between initial spending and constrained spending, np.inf if the constraint failed
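A simplified view of the rescaling for a single year, including the ill-posed case where per-program bounds make the target total unreachable, might look like this. It performs a single proportional pass with clipping; the real constraint redistributes any clipped surplus across the remaining programs:

```python
def constrain_total_spend(spend, total, upper_bounds):
    """Rescale `spend` so it sums to `total`, clipping at per-program upper bounds.

    Raises ValueError (the analogue of UnresolvableConstraint) if the bounds
    make the target total impossible to reach."""
    if sum(upper_bounds.values()) < total:
        raise ValueError('Ill-posed constraint: bounds cannot reach target total')
    scale = total / sum(spend.values())
    constrained = {p: min(v * scale, upper_bounds[p]) for p, v in spend.items()}
    # Distance-like penalty between requested and constrained spending
    penalty = sum(abs(constrained[p] - spend[p]) for p in spend)
    return constrained, penalty

spend = {'a': 90.0, 'b': 30.0}
constrained, penalty = constrain_total_spend(spend, total=100.0,
                                             upper_bounds={'a': 200.0, 'b': 200.0})
# constrained['a'] ≈ 75.0, constrained['b'] ≈ 25.0, penalty ≈ 20.0
```

The returned penalty corresponds to the distance-like value documented above: zero when the proposal already satisfies the constraint, growing as the rescaling moves further from the requested allocation.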

get_hard_constraint(optimization, instructions)[source]¶ Return hard constraint dictionary

Parameters
optimization – Optimization instance
instructions (ProgramInstructions) – Initial ProgramInstructions

Return type
dict

Returns

exception atomica.optimization.UnresolvableConstraint[source]¶ Unresolvable (ill-posed) constraint

This error gets thrown if it is _impossible_ to satisfy the constraints. There are two modes of constraint failure:

- The constraint might not be satisfied on this iteration, but could be satisfied by other parameter values
- The constraint is impossible to satisfy because it is inconsistent (for example, if the total spend is greater than the sum of the upper bounds on all the individual programs), in which case the algorithm cannot continue

This error gets raised in the latter case, while the former should result in the iteration being skipped.

atomica.optimization._objective_fcn(x, pickled_model, optimization, hard_constraints, baselines)[source]¶ Return objective value

This wrapper function takes in a vector of proposed parameters and returns the objective value. It is typically not called directly – instead, it is partialled to bind all arguments except x, and then passed to whichever optimization algorithm is used to optimize x.

Parameters
x – Vector of proposed parameter values
pickled_model – A pickled Model – should contain a set of instructions
optimization – An Optimization
hard_constraints (list) – A list of hard constraints (should be the same length as optimization.constraints)
baselines (list) – A list of measurable baselines (should be the same length as optimization.measurables)

Returns
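The partial application described above follows a standard functools pattern. The body and bound values below are stand-ins; the real function unpickles the model, applies x, constrains the instructions, integrates, and computes the objective:

```python
from functools import partial

def _objective_fcn(x, pickled_model, optimization, hard_constraints, baselines):
    # Stand-in body for illustration: returns a simple function of x only
    return sum(xi ** 2 for xi in x)

# Bind everything except x, so the optimizer sees a function of x alone
objective = partial(_objective_fcn,
                    pickled_model=b'...', optimization=None,
                    hard_constraints=[], baselines=[])

print(objective([1.0, 2.0]))  # 5.0
```

This is why the hard constraints and baselines are computed once up front: they are frozen into the partial and reused unchanged on every evaluation of x.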

atomica.optimization.optimize(project, optimization, parset, progset, instructions, x0=None, xmin=None, xmax=None, hard_constraints=None, baselines=None)[source]¶ Main user entry point for optimization

The optional inputs x0, xmin, xmax and hard_constraints are used when performing parallel optimization (implementation not complete yet), in which case they are computed by the parallel wrapper to optimize(). Normally these variables would not be specified by users, because they are computed from the Optimization together with the instructions (because relative constraints in the Optimization are interpreted as being relative to the allocation in the instructions).

Parameters
project – A Project instance
optimization – An Optimization instance
parset (ParameterSet) – A ParameterSet instance
progset (ProgramSet) – A ProgramSet instance
instructions (ProgramInstructions) – A ProgramInstructions instance
x0 – Not for manual use – override initial values
xmin – Not for manual use – override lower bounds
xmax – Not for manual use – override upper bounds
hard_constraints – Not for manual use – override hard constraints
baselines – Not for manual use – override Measurable baseline values (for relative Measurables)

Returns
A ProgramInstructions instance representing optimal instructions