Solve the linear program

    maximize    x + y
    subject to  1 + x >= 0
                1 + y >= 0
                1 - x - y >= 0

The fact that only the third dual variable is nonzero means that only the third constraint is active at the solution. Notice that, unlike simplex, interior-point algorithms do not necessarily find a basic solution when the problem is dual degenerate (i.e., when an entire face of the feasible region achieves the same objective value).

>> [val,p,d]=iqp([],[1;1],[1 0;0 1;-1 -1],[1;1;1])
iter 1, pstep 1, dstep 1, target 0.5, actual 0.277778
iter 2, pstep 1, dstep 1, target 0.0694444, actual 0.0694444
iter 3, pstep 1, dstep 1, target 0.00868056, actual 0.00868056
iter 4, pstep 1, dstep 1, target 0.000542535, actual 0.000542535
iter 5, pstep 1, dstep 1, target 1.69542e-05, actual 1.69542e-05
iter 6, pstep 1, dstep 1, target 2.6491e-07, actual 2.6491e-07
iter 7, pstep 1, dstep 1, target 2.06961e-09, actual 2.06953e-09
val =
    1.0000
p =
    0.5010
    0.4990
d =
    0.0000
    0.0000
    1.0000

Now solve the quadratic program

    maximize    x + y - 5x^2 - 5y^2
    subject to  1 + x >= 0
                1 + y >= 0
                1 - x - y >= 0

Since no constraints are active at the optimum, all dual variables are zero, and the solution is obtained by setting the derivative of the objective function to zero.

>> [val,p,d]=iqp([10 0;0 10],[1;1],[1 0;0 1;-1 -1],[1;1;1])
iter 1, pstep 1, dstep 1, target 0.5, actual 0.488166
iter 2, pstep 1, dstep 1, target 0.122041, actual 0.12391
iter 3, pstep 1, dstep 1, target 0.0154887, actual 0.0156948
iter 4, pstep 1, dstep 1, target 0.000980922, actual 0.000984559
iter 5, pstep 1, dstep 1, target 3.07675e-05, actual 3.07766e-05
iter 6, pstep 1, dstep 1, target 4.80884e-07, actual 4.80891e-07
iter 7, pstep 1, dstep 1, target 3.75696e-09, actual 3.75697e-09
val =
    0.1000
p =
    0.1000
    0.1000
d =
   1.0e-08 *
    0.3415
    0.3415
    0.4696

Now let's solve a larger problem to get some timing information. Build the transition matrix for an MDP with n states in a line, each able to go left or right, with the ends of the line being goal states.
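Before moving on, the quadratic program above can be verified analytically. The sketch below does the check in Python with NumPy (a translation, since iqp itself is a MATLAB routine); it assumes iqp's apparent convention of maximizing c'p - (1/2)p'Qp subject to Ap + b >= 0, which matches the calls and outputs above.

```python
import numpy as np

# QP data matching the call iqp([10 0;0 10],[1;1],[1 0;0 1;-1 -1],[1;1;1]):
# maximize c'p - (1/2) p'Qp  subject to  Ap + b >= 0  (assumed iqp convention).
Q = np.array([[10.0, 0.0], [0.0, 10.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.ones(3)

# Setting the gradient c - Qp to zero gives the unconstrained maximizer.
p = np.linalg.solve(Q, c)
val = c @ p - 0.5 * p @ Q @ p
print(p, val)        # p = [0.1 0.1], val = 0.1, as iqp reports

# All three constraints are strictly slack at p, so the unconstrained
# maximizer is also the constrained optimum, and all duals are zero.
print(A @ p + b)     # roughly [1.1 1.1 0.8], all strictly positive
```

This confirms the numbers in the transcript: p = (0.1, 0.1) and val = 0.1, with dual variables that are zero up to the solver's tolerance.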
First, for n=3, so we can print the resulting matrix:

>> n = 3;
>> a = [spdiags([-ones(n,1) ones(n,1)], [0;1], n, n); ...
        spdiags([-ones(n,1) ones(n,1)], [0;-1], n, n)];
>> full(a)
ans =
    -1     1     0
     0    -1     1
     0     0    -1
    -1     0     0
     1    -1     0
     0     1    -1

Now we solve for n=100 (with all edges having unit cost).

>> n = 100;
>> a = [spdiags([-ones(n,1) ones(n,1)], [0;1], n, n); ...
        spdiags([-ones(n,1) ones(n,1)], [0;-1], n, n)];
>> tic; [val,p,d] = iqp([],ones(n,1),a,ones(2*n,1),[]); toc;
elapsed_time = 1.1328

A plot shows that we do in fact have the correct answer, namely min(x,101-x). Now solve the same problem with the built-in simplex solver:

>> tic; p = lp(-ones(n,1),-full(a),ones(2*n,1)); toc;
elapsed_time = 87.4354

Admittedly, this is not a fair comparison, since the built-in LP solver can't take advantage of a's sparsity. So try iqp again on the non-sparse version of the problem:

>> tic; [val,p,d] = iqp([],ones(n,1),full(a),ones(2*n,1),[]); toc;
elapsed_time = 34.1413

Finally, to get an idea of the growth rate, we can try the same (non-sparse) problem for n = 50, 100, and 200. The times for iqp are 3.4, 34, and 340 seconds, while the times for Matlab's simplex are 10, 87, and 1300: each doubling of n multiplies iqp's time by about 10, while the simplex times grow by factors of roughly 9 and then 15, so the growth rate for simplex looks worse.
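The construction and the claimed answer min(x, 101-x) can also be reproduced with off-the-shelf tools. The sketch below is a Python/SciPy translation of the MATLAB session above: scipy.sparse.spdiags follows the same diagonal convention as MATLAB's spdiags, scipy.optimize.linprog mirrors the lp(-ones(n,1), -full(a), ones(2*n,1)) call, and line_mdp_matrix is an illustrative helper name, not part of any library.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import spdiags, vstack

def line_mdp_matrix(n):
    """Constraint matrix for the n-state line MDP: one block of rows for
    stepping right and one for stepping left, each with -1 on the diagonal
    and +1 on the neighboring diagonal (as in the MATLAB spdiags call)."""
    e = np.ones(n)
    right = spdiags([-e, e], [0, 1], n, n)
    left = spdiags([-e, e], [0, -1], n, n)
    return vstack([right, left]).tocsr()

# Same 6x3 matrix as the full(a) display above.
print(line_mdp_matrix(3).toarray())

# For n = 100: maximize sum(v) subject to a*v + 1 >= 0, i.e. (-a)*v <= 1.
# linprog minimizes, so negate the objective, just as the lp call does.
n = 100
a = line_mdp_matrix(n)
res = linprog(-np.ones(n), A_ub=-a.toarray(), b_ub=np.ones(2 * n),
              bounds=(None, None))

# The optimal value function should be the distance to the nearest goal.
x = np.arange(1, n + 1)
print(np.allclose(res.x, np.minimum(x, n + 1 - x)))  # True
```

The final check works because min(x, n+1-x) is the pointwise largest feasible v (feasibility forces v_x <= x from the left end and v_x <= n+1-x from the right), so it is the unique maximizer of sum(v).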