There’s a neat way to model decision making as an (integer) linear program for managing a collection of resources. In my head it’s called the “state-dynamics” model. In other places I’ve seen it, such as this set of notes, it’s an unnamed “formulette,” and they describe it as a “periodic review of a continuous process.” That name also makes sense, because you can view this model as periodically sampling a continuous function, and optimizing over an (approximate) integral of the function.

I’ll explain the situation with bananas. Say you’re managing an inventory of bananas. You occasionally get orders from customers to buy your bananas, and you occasionally place orders for new bananas from wholesale suppliers. As you read this, feel free to replace “bananas” with any good that you can bucket over time and according to specific features.

You can model this by introducing “state” variables representing the quantity of bananas in stock, say `Quantity_t`, where `t` is a regularly sampled time across some predetermined planning horizon, say monthly samples for a year. Then you add “dynamics” constraints that look like

```
Quantity_{t+1} = Quantity_t + Purchase_[t,t+1] - Order_1 - Order_2 - ...
```

where `Purchase_[t,t+1]` is an integer variable representing the amount of bananas the solver will choose for you to purchase during the month, and `Order_i` is the quantity ordered by a customer order (which, say, occurs once per order instance).

Then you add constraints like

```
Quantity_0 = CURRENT_STOCK
Quantity_t >= SAFETY_FLOOR   for all t
```

The solver gets to “solve” for the `Quantity_t`, but any solution that respects the dynamics constraints is literally tracking the state of banana inventory each month. The dynamics constraints ensure that each subsequent state variable is consistent with the previous state and the changes in each interval. And the `CURRENT_STOCK` constraint is the base case. So the only “choices” for the state variables are forced through the *real* decision variables, which are the purchases. Though, the solver doesn’t “know” that when it solves the model.
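
To make that concrete, here is a minimal plain-Python sketch (all quantities invented) showing that once the purchase variables are fixed, the dynamics constraints determine every state variable by a simple recurrence:

```python
# Hypothetical data: current stock, a year of monthly order totals, and a
# candidate purchase plan of the kind the solver might propose.
CURRENT_STOCK = 100
orders = [80, 120, 90, 150, 60, 110, 95, 130, 70, 100, 85, 140]
purchases = [90, 100, 130, 80, 120, 100, 120, 90, 110, 80, 130, 90]

# The dynamics constraints pin down every state once purchases are chosen:
# Quantity_{t+1} = Quantity_t + Purchase_[t,t+1] - (orders during the month).
quantity = [CURRENT_STOCK]
for t in range(len(orders)):
    quantity.append(quantity[t] + purchases[t] - orders[t])

# A feasibility check mirroring Quantity_t >= SAFETY_FLOOR for all t.
SAFETY_FLOOR = 0
feasible = all(q >= SAFETY_FLOOR for q in quantity)
print(quantity)
print("feasible:", feasible)
```

This is exactly the computation the constraints encode; the solver just performs it implicitly while searching over purchase plans.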

The optimization objective can be something like, “minimize the sum of purchase costs plus the sum of inventory storage costs,” which can be written as a linear function of the purchase variables and state variables, respectively. Now you have yourself a tidy little inventory management model. Because we’re sampling over time, the total cost is more like an integral over time, and so you’d want to apply a quadrature rule to the samples to get area. A good quadrature rule that keeps the objective linear is the trapezoid rule.
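
As a sketch, here is how the trapezoid rule turns sampled inventory levels into a storage-cost term that is still linear in the `Quantity_t` (the cost rate and samples here are made up):

```python
# Hypothetical per-banana-month storage cost rate and sampled inventory levels.
STORAGE_COST_PER_BANANA_MONTH = 0.05
quantity = [100, 110, 90, 130, 60]  # Quantity_t samples, invented numbers

# Trapezoid rule: approximate the integral of inventory over time by
# averaging adjacent samples. Each term is a linear combination of the
# Quantity_t, so the resulting cost is a valid LP objective term.
banana_months = sum(
    (quantity[t] + quantity[t + 1]) / 2 for t in range(len(quantity) - 1)
)
storage_cost = STORAGE_COST_PER_BANANA_MONTH * banana_months
print(banana_months, storage_cost)
```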

This model seems quaint, but it has some hidden surprises that make it a very flexible and efficient formulation.

In solving an integer program, performance is closely related to the number of integer variables the solver must solve for. This is because continuous variables can be solved for efficiently using the simplex method (and related methods), while integer variables must go through an exponential branch-and-bound search.

The state-dynamics model above is nice because the only integer variables are the purchase variables. That is, the state variables can all be marked as continuous and treated as continuous by the solver, and it just so happens that if `CURRENT_STOCK` is an integer, and if each `Order_i` is an integer, then all the state variable values are forced to be integer for free, because integer values are the only feasible solutions. The solver will find them without a fuss.

This is a commonly used trick, where the organization of the model ensures some variables must be integer in any feasible solution. If you can do this with ALL your variables, then you can use a much faster linear solver. In our case, the purchase variables still need to be integer.

Imagine you have multiple brands of bananas now, and you have to decide which bananas to use to fulfill customer orders. In this case, the `Quantity_t` state variables gain an index, like `Quantity_{Dole, t}` or `Quantity_{Chiquita, t}`, and the corresponding dynamics constraints and purchase variables get a new index.

Then the `Order_i` become new (integer) decision variables instead of just constants, like `Order_{i, Dole}`, whose solution value counts how many Dole bananas are used to fulfill order `i`.
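
A hypothetical sketch of the brand-indexed bookkeeping, using dictionaries keyed by `(brand, t)` to mirror the indexed variables (all numbers invented):

```python
# Two brands; state, purchase, and per-order fulfillment now carry a brand index.
brands = ["Dole", "Chiquita"]
start = {"Dole": 100, "Chiquita": 50}
purchases = {("Dole", 0): 40, ("Chiquita", 0): 30}
# Order_{i, brand}: how many bananas of each brand fulfill order i in month 0.
order_fulfillment = {(1, "Dole"): 60, (1, "Chiquita"): 20}

# One dynamics step per brand: the constraints decouple across the new index.
quantity = {(b, 0): start[b] for b in brands}
for b in brands:
    used = sum(v for (_i, bb), v in order_fulfillment.items() if bb == b)
    quantity[(b, 1)] = quantity[(b, 0)] + purchases[(b, 0)] - used
print(quantity)
```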

Similarly, your inventory might be spread across many warehouses, with time delays between purchasing some types of bananas and having them arrive at a warehouse. This can all be incorporated into the dynamics constraints, with a little bit of extra work to track the relation between a purchase in month `t_1` and the later month `t_2` in which the purchased inventory arrives and can be used to fulfill orders.
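
One way to sketch the lead-time bookkeeping: map each purchase placed in month `t` to the dynamics constraint of the month it arrives (the lead times here are hypothetical):

```python
# Hypothetical lead times: a purchase placed in month t arrives in month
# t + lead_time[brand], so it enters that later month's dynamics constraint.
lead_time = {"Dole": 2, "Chiquita": 1}
purchases = {("Dole", 0): 40, ("Chiquita", 0): 30}  # placed in month t=0

# arrivals[(brand, t)] is the quantity added in month t's dynamics constraint.
arrivals = {}
for (brand, t), amount in purchases.items():
    t_arrive = t + lead_time[brand]
    arrivals[(brand, t_arrive)] = arrivals.get((brand, t_arrive), 0) + amount
print(arrivals)
```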

Another nice way the dynamics constraints generalize is that they can incorporate any sort of fixed changes to the inventory stock. For example, you might have committed purchase orders for new bananas, and committed customer orders. These would be incorporated into the dynamics constraints as constants instead of variables.

A natural version of this model involves purchase limits, i.e., statements from your supplier that they can only deliver so much.

In such cases, you might get into a situation in which the banana demand exceeds your supply at some point during the year. Without any special work, the solver will give up and say “this model is infeasible.” In this case, the constraints `Quantity_{Dole, t} >= SAFETY_FLOOR` are impossible to satisfy: orders force the inventory to dip below the floor, and it is not possible to purchase enough bananas to recover, due to the purchase limits or time delays. Infeasibility grinds the system to a halt, preventing the solver from even making partial decisions about what to buy or fulfill.

A nice workaround is to use slack variables. In this case, you’d add a new continuous variable `z_{Dole, t} >= 0` and modify the safety floor constraint to be `Quantity_{Dole, t} + z_{Dole, t} >= SAFETY_FLOOR`. This allows the solver to always satisfy this constraint, so to prevent the solver from using the `z` unless absolutely necessary, you add an objective term of `+ C * z_{Dole, t}`, where `C` is a sufficiently large constant, say, 100x the largest non-slack coefficient in the objective. This has the effect of making fake bananas infinitely available for purchase, but their “purchase cost” is too prohibitive for the fake bananas to be optimal unless the model is otherwise infeasible.

If every non-dynamics constraint has this sort of slack variable incorporated, then the model can never be infeasible by design. On top of that, the slack variables act as an alerting mechanism. If a slack variable is nonzero in the solution, your system should report it to a human operator. This means that there is some defect that cannot be corrected without human intervention. Maybe they will cancel some orders, or rush-order more bananas to fix the problem. If the slack variables are set up well, each reported defect is localized (see next section), and the “blast radius” of the defect can be calculated.
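
A sketch of the penalty and alerting logic, with a hypothetical choice of `C` and made-up solved slack values:

```python
# Hypothetical choice of C: 100x the largest non-slack objective coefficient.
largest_objective_coeff = 3.0  # e.g., the priciest banana's purchase cost
C = 100 * largest_objective_coeff

# Made-up solved values of the slack variables z_{brand, t}.
slack_solution = {("Dole", "Feb"): 100.0,
                  ("Dole", "Mar"): 0.0,
                  ("Chiquita", "Feb"): 0.0}

# The total penalty the solver paid for using fake bananas.
penalty = sum(C * z for z in slack_solution.values())

# Alerting: any nonzero slack is a defect needing human attention.
# A small tolerance avoids alerting on floating-point noise from the solver.
alerts = [(key, z) for key, z in slack_solution.items() if z > 1e-6]
for (brand, month), z in alerts:
    print(f"ALERT: {z:g} phantom {brand} bananas needed in {month}")
```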

This slack variable business is a lot more nuanced than it seems! First, the choice of `C` is tricky, since if it’s too large it will cause numerical instability, and if it’s too small the solver will use it to cheat and invent bananas out of thin air.

Second, it’s not clear how critical it is that I put the slack variable in the safety floor constraint. For example, you might think to put the `z_{Dole,t}` in the dynamics constraint. It might seem especially good to do it that way if the infeasibility appears to come from unclean data, because it would “correct” the error and keep it corrected in later months.

Doing it that way introduces subtle bugs, which I’ve personally made, and which I’ve watched colleagues make and missed in my review. The following situation explains why. Say your initial banana inventory/order data is incorrect. In one warehouse, the starting inventory is 100 bananas, and the first month has 200 orders’ worth of bananas, which is infeasible. Maybe someone forgot to mark an order as canceled. Imagine further that all banana purchases take one month to arrive, so the solver can’t buy new bananas and is forced to use slack variables to stay feasible.

Now say you put the slack variable in the dynamics constraint, and there’s an incoming (committed) purchase order for 500 bananas arriving the next month. The constraints would look like (`Q` for `Quantity`)

```
0 <= Q_{Dole, Jan} = 100
0 <= Q_{Dole, Feb} = Q_{Dole, Jan} - 200 + z_{Dole, Jan/Feb}
0 <= Q_{Dole, Mar} = Q_{Dole, Feb} + 500 + z_{Dole, Feb/Mar}
0 <= Q_{Dole, Apr} = Q_{Dole, Mar} - Order_{1, Dole} + z_{Dole, Mar/Apr}
```

Assume the safety floor values are all zero. The value of `z_{Dole, Jan/Feb}` is forced to be 100, but then the committed purchase of 500 arriving the next month makes the state variable in March be 500! This means the solver will think it has 500 Dole bananas to use to fulfill Order 1, when 100 of those are faked. Worse, it will be harder to trace where those fake bananas came from, since the culprit was a slack variable a few months earlier in the model. The impact of correcting the defect persisted beyond the period in which the deficit was present.

By contrast, if the slack variable is in the safety floor constraint, then the value of `Q_{Dole, Feb}` will be -100, and when it comes time to fulfill Order 1, we will only have 400 Dole bananas to pull from. This is the right thing to do because it’s the risk-averse approach. We can’t know whether the 100 banana deficit corresponds to a real order (which might be pushed back a month) or a data error, so it’s better to debit it from the future 500 banana credit and make the solver compensate by using Chiquitas to partially fulfill Order 1, rather than promise to dole out phantom Doles.

Pushing the slack variable to the safety floor constraint achieves this, and it further allows you to easily connect slack variable usage alerts to the orders they impact, since the affected orders appear in the dynamics constraints of adjacent time periods.
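
The whole worked example can be replayed in a few lines of plain Python, contrasting the two slack placements (the numbers are the ones from the scenario above):

```python
# Scenario: 100 bananas in January, 200 orders' worth in January, a committed
# 500-banana purchase arriving in March, a floor of 0, no new purchases possible.
start, jan_orders, mar_arrival = 100, 200, 500

# Placement 1 (buggy): slack in the dynamics constraint. The slack becomes
# part of the state, so the 100 fake bananas persist into later months.
z_jan = max(0, jan_orders - start)            # forced to 100 to keep Q >= 0
q_feb_dynamics = start - jan_orders + z_jan   # 0
q_mar_dynamics = q_feb_dynamics + mar_arrival # 500, but 100 are phantom

# Placement 2: slack in the safety floor constraint only. The state records
# the true deficit, and the slack merely excuses the floor violation.
q_feb_floor = start - jan_orders              # -100, the honest deficit
z_feb = max(0, 0 - q_feb_floor)               # slack absorbs the violation
q_mar_floor = q_feb_floor + mar_arrival       # 400 truly available

print("dynamics-slack March stock:", q_mar_dynamics)
print("floor-slack March stock:", q_mar_floor)
```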