In mathematical optimization theory, the simplex algorithm of George Dantzig is the fundamental technique for the numerical solution of the linear programming problem. That is, given a collection of linear inequalities on n real variables, it provides a practical way to find the solution that is optimal with respect to a fixed linear functional. Some further details are given on the page for linear programming.
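For example, such a problem can be set up and solved numerically as follows. This is a minimal sketch assuming SciPy is available; its linprog routine minimises by convention, so the objective is negated in order to maximise.

```python
import numpy as np
from scipy.optimize import linprog

# Maximise L(x) = x1 + 2*x2 subject to the linear inequalities
# x1 + x2 <= 3, x2 <= 2, and x1, x2 >= 0.
# linprog minimises by convention, so we pass the negated objective.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [0.0, 1.0]])
b_ub = np.array([3.0, 2.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point (1, 2) and maximal value 5
```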

In geometric terms, we are considering a closed convex polytope P, defined by intersecting a number of half-spaces in n-dimensional Euclidean space; each half-space consists of the points lying to one side of a given hyperplane. The objective is to maximise a linear functional L; if we consider the hyperplanes H(c) defined by L(x) = c as c increases, these form a parallel family. If the problem is well-posed, we want to find the largest value of c such that H(c) intersects P (if there is no such largest value of c, the functional is unbounded on P and this isn't a reasonable question for optimization as it stands). In that case we can show that the optimum value of c is attained on the boundary of P. Methods for finding the optimum point on P may either improve a feasible point by moving through the interior of P (so-called interior point methods), or start on the boundary and remain there.
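For instance, consider the plane problem of maximising L(x1, x2) = 3x1 + 2x2 subject to x1 + x2 ≤ 4, x1 ≤ 2, and x1, x2 ≥ 0 (a small example chosen purely for illustration). Here P is a quadrilateral, and the hyperplanes H(c) are the parallel lines 3x1 + 2x2 = c; as c increases these sweep across P, and the largest value for which H(c) still meets P is c = 10, attained at the boundary vertex (2, 2).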

The simplex algorithm falls into the latter class of method. The idea is to move along the edges of P, from vertex to vertex, in search of the optimum. Note that the optimum cannot occur only at a mid-point, for example in the middle of an edge lying as a line segment on the boundary of P, unless the whole relevant face is parallel to the hyperplanes H(c); in that case every point of the face is optimal, including its vertices. Therefore it is possible to look solely at moves along the edges of P, ignoring the interior. The algorithm specifies how this is to be done.
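The following is a minimal sketch of one classical tableau form of the algorithm, for problems of the form: maximise c·x subject to Ax ≤ b and x ≥ 0, assuming b ≥ 0 so that the origin is an initial vertex. The function name and tolerances are illustrative choices rather than a reference implementation, and no anti-cycling rule is included.

```python
import numpy as np

def simplex(c, A, b):
    """Maximise c @ x subject to A @ x <= b and x >= 0.
    Assumes b >= 0, so the origin is an initial basic feasible vertex.
    A textbook sketch: dense tableau, no anti-cycling rule."""
    m, n = A.shape
    # Tableau [A | I | b] with objective row [-c | 0 | 0]; the identity
    # block holds the slack variables that turn inequalities into equations.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))  # the slack variables start in the basis
    while True:
        # Entering variable: most negative entry of the objective row.
        j = int(np.argmin(T[-1, :-1]))
        if T[-1, j] >= -1e-9:
            break  # no improving edge leaves this vertex: it is optimal
        # Leaving variable: minimum ratio test over rows with positive pivot.
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))
        if not np.isfinite(ratios[i]):
            raise ValueError("objective unbounded on the feasible region")
        # Pivot: algebraically, this is the move to an adjacent vertex.
        T[i] /= T[i, j]
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n]
```

Applied to the two-variable example above, simplex(np.array([3.0, 2.0]), np.array([[1.0, 1.0], [1.0, 0.0]]), np.array([4.0, 2.0])) returns the vertex (2, 2): each pivot moves from the current vertex along an improving edge of P.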

In studies of the computational complexity of algorithms, the simplex algorithm was a puzzling example: it is remarkably efficient in practice, yet no variant of it is known to run in polynomial time in the worst case. In fact, for some time it was not known whether the linear programming problem was NP-complete or solvable in polynomial time.

The first worst-case polynomial-time algorithm for the linear programming problem was proposed by Leonid Khachiyan in 1979. It was based on the ellipsoid method in nonlinear optimization due to Naum Shor, which generalizes the ellipsoid method in convex optimization of Arkadi Nemirovski, a 2003 John von Neumann Theory Prize winner, and D. Yudin. Khachiyan's algorithm was primarily of theoretical interest. In 1984, N. Karmarkar proposed the projective method, with much better computational complexity. This spurred further research, leading to algorithms that even outperform the simplex algorithm in some important special cases, e.g., for problems with large sparse matrices.