
An inroad into linear programming

17 min read
May 12, 2023


When faced with a complex problem to solve, the first thing we do is take it apart and try to get a feel for its underlying structure. With a little practice and a little luck, insights into the structure of the problem enable us to apply a well-known algorithmic technique to it and come out on the other end sailing smoothly.

So it’s important that our problem-solving toolbox is well rounded and that we know how to handle a problem that fits a familiar pattern. This blog discusses a technique called linear programming that solves optimization problems and is nothing short of spectacular.

Optimization problems#

An optimization problem is one where something is being minimized or maximized. Here are some examples:

  • Finding a path of the shortest length between two places.
  • Finding the maximum flow in a water supply network.
  • Minimizing the cost of transporting goods in a network.
  • Finding the maximum profit possible for a company, in a setting subject to given constraints.

These examples are underspecified in that the constraints that apply to each problem aren’t clearly stated. In real-life scenarios, optimization problems can have thousands of constraints.

Linear programming#

Many optimization problems can be expressed precisely and cleanly as a mathematical model called a linear program, or LP for short.

Note: Once a problem is formulated as an LP, it can be solved using one of the many well-known algorithms designed to solve linear programs.

Main ingredients of an LP#

The quantity being minimized or maximized is written out as a math function called the linear objective function.

  1. Linear objective function: A linear objective function is a linear function of some variables. Suppose $x_1$ and $x_2$ are two unknown quantities. Here’s an example of a linear objective function with the unknowns $x_1$ and $x_2$:

    $$2x_1 + 3x_2$$

    The qualifier ‘linear’ means the power of each variable is one.

  2. Linear constraints: Each problem constraint is typically written out as a linear inequality or equality. For example,

$$5x_1 + 3x_2 \leq 3$$

A linear program expresses an optimization problem in terms of a linear objective function that needs to be maximized or minimized subject to linear constraints.

Let’s look at a short example to understand how to represent a problem mathematically as a linear program.

Example#

Suppose a small business manufactures two liquid chemicals $\text{P1}$ and $\text{P2}$. Also, suppose the following facts:

  • It earns a profit of $10 per month for each gallon of $\text{P1}$ produced.
  • It earns a profit of $20 per month for each gallon of $\text{P2}$ produced.
  • It doesn’t have the physical capacity to produce more than 500 gallons.

The goal is to maximize the company’s profit.

Let’s say we use two variables $x_1$ and $x_2$ so that

$$\begin{align*} x_1 &= \text{number of gallons of P1 produced per month}\\ x_2 &= \text{number of gallons of P2 produced per month} \end{align*}$$

Question 1

The objective function for the problem is $10x_1 + 20x_2$. Is it apparent why?

Question 2

The following two constraints apply to this problem. Is it apparent why?

$$\begin{align*} x_1 &\geq 0\\ x_2 &\geq 0 \end{align*}$$

Question 3

Is there any other information in the problem statement that can be expressed as a linear constraint?


Putting together our answers to the questions above, we can express this problem concisely in the following form. This is an example of a linear program:

$$\begin{align*} &\text{Maximize } & 10x_1 &+ 20x_2 \\ &\text{Subject to } & x_1 &+ x_2 \leq 500\\ && x_1 &\geq 0\\ && x_2 &\geq 0 \end{align*}$$

The above linear program would be read as “Maximize $10x_1 + 20x_2$ subject to the constraints.” In other words, maximize the profit subject to the given constraints.

Our goal then is to find values of $x_1$ and $x_2$ for which the value of the objective function is maximized and all problem constraints are satisfied.

Each of the three constraints is satisfied by many values of $x_1$ and $x_2$. These are called the solutions to the constraint. For example, the values $x_1 = 20$ and $x_2 = 10$ constitute a solution to all three constraints because they’re both non-negative and their sum is at most 500.
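Before turning to the geometry, it helps to see what this model looks like in code. Here’s a minimal sketch using the open-source Python library PuLP (revisited at the end of this post); the problem and variable names are our own:

```python
# A minimal sketch of the manufacturing LP, using PuLP (pip install pulp).
from pulp import LpMaximize, LpProblem, LpStatus, LpVariable, value

prob = LpProblem("chemical_profit", LpMaximize)
x1 = LpVariable("x1", lowBound=0)   # gallons of P1; lowBound encodes x1 >= 0
x2 = LpVariable("x2", lowBound=0)   # gallons of P2; lowBound encodes x2 >= 0

prob += 10 * x1 + 20 * x2           # the objective: monthly profit
prob += x1 + x2 <= 500              # the capacity constraint

prob.solve()
print(LpStatus[prob.status])        # Optimal
print(value(x1), value(x2))         # 0.0 500.0
print(value(prob.objective))        # 10000.0
```

Unsurprisingly, the solver pours all 500 gallons of capacity into $\text{P2}$, the more profitable chemical, for a monthly profit of $10,000.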

When working with two or three variables, it’s helpful to get a geometric view of a linear program. The solutions to each of the above constraints correspond to a half-plane, as shown by the shaded areas below against each constraint. A half-plane, as the name suggests, includes the points on a line and the points that fall to one of its sides.

Set of points satisfying the shown inequality (one plot per constraint)

An assignment of values to the variables that satisfies all the constraints of a linear program is called a feasible solution to the linear program.

Geometrically, the set of all feasible solutions of the given linear program, called the feasible region, corresponds to the following shaded polygon resulting from the intersection of the three half-planes:

Intersection of the three half-planes

Now a theorem gives us this important insight:

If a linear program has an optimal solution, then there’s a corner of the feasible region where the objective function of the linear program is maximized (or minimized).

Take a look at the dotted graph of the objective function as it takes different values for points in the feasible region, attaining its maximum value at a corner of the feasible region.

The objective function takes the same value along the dotted lines

The value taken by an objective function on a particular solution is called an objective value.

In a small example such as this one, it’s easy to find the solution algebraically, or by taking the geometric view and looking for a corner of the feasible region where the objective value is optimized. But in reality, a linear program can have many variables and constraints. The geometric view in two or three dimensions is helpful for building our intuition, but we need good tools to solve bigger problems.

Let’s see what a linear program in its standard form would look like.

The standard form of an LP#

A linear program in $n$ unknowns, say $x_1, x_2, \ldots, x_n$, is expressed in the standard form with these characteristics:

  • It is written as a maximization of a linear objective function in $n$ variables of the form

    $c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$, where $c_1, c_2, \ldots, c_n$ are constants.

  • It has $m$ linear constraints of the type $\leq$, so that the $i^{th}$ constraint looks like
    $a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n \leq b_i$, where $b_i, a_{i1}, \ldots, a_{in}$ are all constants.

  • It’s also subject to non-negativity constraints of the kind $x_i \geq 0$, one for each variable $x_i$.

So the general form is precisely the following:

$$\begin{align*} &\text{Maximize } &\ \ c_1 x_1 + c_2 x_2 &+ \cdots + c_n x_n \\ &\text{Subject to } & a_{11} x_1 + a_{12} x_2 &+ \cdots + a_{1n} x_n \leq b_1\\ && a_{21} x_1 + a_{22} x_2 &+ \cdots + a_{2n} x_n \leq b_2\\ && \vdots &\\ && a_{m1} x_1 + a_{m2} x_2 &+ \cdots + a_{mn} x_n \leq b_m\\ && x_1 &\geq 0 \\ && x_2 &\geq 0 \\ && \vdots &\\ && x_n &\geq 0 \end{align*}$$

This is written out more compactly in the matrix form as:

$$\begin{align*} &\text{Maximize } & \mathbf{c \cdot x} &\\ &\text{Subject to } & \mathbf{Ax} &\leq \mathbf{b}\\ && \mathbf{x} &\geq 0 \end{align*}$$

where

$$\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ & & \vdots & \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \quad \mathbf{x} = \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_m \end{pmatrix}, \quad \mathbf{c} = \begin{pmatrix} c_1\\ c_2\\ \vdots\\ c_n \end{pmatrix}$$

So, the dot product of the vectors $\mathbf{c}$ and $\mathbf{x}$ gives us the objective function. Similarly, multiplying the first row of the matrix $\mathbf{A}$ with the column vector $\mathbf{x}$ gives us the left-hand side of the first constraint. And so on.
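To see the matrix form in action, here’s a minimal sketch using SciPy’s linprog solver (our choice of tool; any LP solver would do). SciPy minimizes by convention, so we negate $\mathbf{c}$ to maximize:

```python
# The manufacturing LP in matrix form, solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

c = np.array([10, 20])   # objective coefficients (to be maximized)
A = np.array([[1, 1]])   # one row per <= constraint: x1 + x2 <= 500
b = np.array([500])

# linprog minimizes, so we pass -c; its default bounds (0, None) are
# exactly the non-negativity constraints x >= 0 of the standard form.
res = linprog(-c, A_ub=A, b_ub=b, method="highs")
print(res.x)             # optimal point: [  0. 500.]
print(-res.fun)          # optimal objective value: 10000.0
```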

Note: The standard form of a linear program defined here also applies to minimization problems.

There is an uncomplicated, systematic way of converting an arbitrary linear program to its standard form. Here are some hints for the readers who are interested:

  • A minimization of an objective function becomes a maximization of its negation.
  • A $\geq$ constraint becomes a $\leq$ constraint when both of its sides are multiplied by $-1$.
  • An equality constraint is replaced by a pair of $\leq$ and $\geq$ constraints.
  • A variable that’s allowed to be negative is replaced by the difference of two non-negative variables.
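For instance, here’s a made-up toy minimization (with non-negativity constraints left unchanged) rewritten in standard form using the first two hints:

$$\text{Minimize } x_1 - x_2 \text{ subject to } x_1 - x_2 \geq 2 \quad \Longrightarrow \quad \text{Maximize } -x_1 + x_2 \text{ subject to } -x_1 + x_2 \leq -2$$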

Unboundedness and infeasibility#

Depending on the constraints, the feasible region may also be unbounded (not closed off by boundaries on all sides). For instance, if the only constraints in a linear program are $x_1 \geq 0$ and $x_2 \geq 0$, then the feasible region is unbounded. In a case like that, the objective value may keep increasing in some direction forever, and the optimum value simply does not exist.

A linear program may also happen to have constraints that cannot all be satisfied at once. For instance, there’s no way to satisfy both of the constraints $x_1 \leq 10$ and $x_1 \geq 20$. A linear program such as this, with no solution, is said to be infeasible.
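Solvers report these two conditions instead of an optimum. As a small sketch (again using SciPy, our choice of tool), both cases can be reproduced directly:

```python
# Detecting unboundedness and infeasibility with scipy.optimize.linprog.
from scipy.optimize import linprog

# Unbounded: maximize x1 + x2 (minimize -x1 - x2) with only x >= 0
# (the default bounds), so the objective grows forever.
res = linprog([-1, -1], method="highs")
print(res.status, res.message)   # status 3: the problem is unbounded

# Infeasible: x1 <= 10 and x1 >= 20 (written as -x1 <= -20) can't both hold.
res = linprog([1], A_ub=[[1], [-1]], b_ub=[10, -20], method="highs")
print(res.status, res.message)   # status 2: the problem is infeasible
```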

Note: With $n$ variables, the solutions satisfying each constraint form a half-space (a generalization of a half-plane) in $n$ dimensions. And the feasible region is a convex polytope, which is a generalization of a polygon to $n$ dimensions. (A region is convex if the line segment between any two points in the region is contained within that region.)

Solving a linear program#

Once we have modeled an optimization problem as a linear program, we can solve it.

There are many algorithms for solving a linear program. The simplex algorithm and interior-point algorithms are widely used in practice. (There are others of greater theoretical interest.)

The simplex algorithm works by iteratively moving along the edges of the feasible region toward a corner that yields an improved objective value. Interior-point methods, on the other hand, move through the interior of the feasible region.
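As a quick illustration (assuming SciPy 1.6 or later), the HiGHS backend behind scipy.optimize.linprog exposes solvers from both families, so the same model can be run through each:

```python
# Solving the same LP with a dual simplex and an interior-point solver.
from scipy.optimize import linprog

c = [-10, -20]                  # negated to maximize 10*x1 + 20*x2
A_ub, b_ub = [[1, 1]], [500]

simplex = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")    # dual simplex
interior = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")  # interior point
print(simplex.x, interior.x)    # both arrive at [  0. 500.]
```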

Implementations of well-known linear programming algorithms and their variations are available as both free and commercial libraries called solvers.

Duality#

A really important idea in linear programming is that every LP (the primal LP) can be used to obtain a different LP called its dual. The dual can be found in a systematic way:

  • For each variable in the primal LP, there’s a constraint in the dual.
  • For each constraint in the primal LP, there’s a variable in the dual.
  • If one of these is a maximization, the other is a minimization.
  • The direction of the inequality is flipped for the problem constraints, but non-negativity constraints still apply.

Here’s a standard way these two are expressed. Note that $\mathbf{A^T}$ in the dual form below is the transpose of $\mathbf{A}$.

Primal

$$\begin{align*} &\text{Maximize } & \mathbf{c \cdot x} &\\ &\text{Subject to } & \mathbf{Ax} &\leq \mathbf{b}\\ && \mathbf{x} &\geq 0 \end{align*}$$

Dual

$$\begin{align*} &\text{Minimize } & \mathbf{b \cdot y} &\\ &\text{Subject to } & \mathbf{A^T y} &\geq \mathbf{c}\\ && \mathbf{y} &\geq 0 \end{align*}$$

Interestingly, the dual of a dual is the primal again.

So for a pair of primal and dual LPs, we are looking at two closely related problems: a maximization and a minimization. The primal-dual relationship is a rich source of algorithmic developments in which one problem of the pair is solved using an algorithm for the other.

Two core insights into the primal-dual relationship come from the weak-duality and strong-duality theorems.

  • Weak duality: Suppose the primal LP is a maximization problem. Then the objective value $\mathbf{p}$ of any feasible solution to the primal is bounded from above by the objective value $\mathbf{d}$ of any feasible solution to the dual.

    $$\text{(Maximization)} \qquad \mathbf{p} \leq \mathbf{d} \qquad \text{(Minimization)}$$

  • Strong duality: If an optimum objective value exists for one of the primal-dual pair of LPs, it is guaranteed to exist for the other. And not just that: in such a case, the optimum objective values for the two problems are exactly the same.
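To make this concrete, take the manufacturing LP from earlier: it has two variables and one capacity constraint, so its dual has one variable $y_1$ and two problem constraints:

$$\begin{align*} &\text{Minimize } & 500 y_1 &\\ &\text{Subject to } & y_1 &\geq 10\\ && y_1 &\geq 20\\ && y_1 &\geq 0 \end{align*}$$

The dual optimum is clearly $y_1 = 20$, with an objective value of $500 \times 20 = 10000$, exactly the primal’s maximum profit, just as strong duality promises.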

The maximum-flow and minimum-cut problems are famous for illustrating how linear programming duality is effective for arriving simultaneously at solutions for both problems. Approximation algorithms also make effective use of the primal-dual LP relationship for solving other hard problems.

Use in approximation algorithms#

A linear program with additional constraints requiring the variables to be integers is called an integer linear program (ILP). Even though polynomial-time algorithms for solving LP problems are well known, integer linear programming is NP-hard, and polynomial-time algorithms for it aren’t known to exist. One way to deal with problems like these efficiently is to devise approximation algorithms for them.

An approximation algorithm is an algorithm that gives a ‘good enough’ solution (and not necessarily the best solution), but that solution is accompanied by some kind of mathematical guarantee of how good it is.

  • One of the ways an approximation algorithm solves an ILP is by first relaxing (which means removing) the integer constraints and finding a fractional solution to the linear program. This fractional solution is rounded to get an approximate integer solution. Then mathematical arguments are built around the two solutions to prove that the approximate integer solution is ‘good enough’. (A sketch follows this list.)
  • Approximation algorithms are also designed by tapping into the primal-dual relationship between two problems.
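As a hedged sketch of relax-and-round (our own illustration, not one prescribed by this post), here is the classic 2-approximation for minimum vertex cover: solve the LP relaxation with PuLP, then take every vertex whose fractional value is at least $1/2$:

```python
# LP relaxation + rounding for minimum vertex cover, sketched with PuLP.
from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

# A small illustrative graph, given as a list of edges.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
nodes = sorted({v for e in edges for v in e})

# Relaxation: drop integrality, keep 0 <= x_v <= 1 for every vertex v.
prob = LpProblem("vertex_cover_lp", LpMinimize)
x = {v: LpVariable(f"x_{v}", lowBound=0, upBound=1) for v in nodes}
prob += lpSum(x[v] for v in nodes)   # minimize the (fractional) cover size
for u, v in edges:
    prob += x[u] + x[v] >= 1         # every edge must be covered
prob.solve()

# Rounding: x_u + x_v >= 1 forces max(x_u, x_v) >= 1/2 on every edge, so
# this set is a valid cover, and its size is at most twice the LP optimum
# (hence at most twice the best integer solution).
cover = [v for v in nodes if value(x[v]) >= 0.5]
print("fractional optimum:", value(prob.objective))
print("rounded cover:", cover)
```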

In short, linear programming is immensely useful for increasing efficiency and cost-effectiveness across a variety of systems and processes. For this reason, it is used heavily in disciplines like agriculture, finance, operations research, transport, the military, engineering, and technology. It is also applied beautifully in areas of active theoretical research.


If you’d like to learn how to use the Python library PuLP for linear programming, the following course on the Educative platform can help:

Master Linear Programming in Python with PuLP
