The present work is a part of the ongoing research project at the Department of Industrial & Manufacturing Systems Engineering at Louisiana State University, sponsored by Louisiana Education Quality Support Funds (LEQSF).
The problem of assigning electric power repair depots to various locations in a damaged area assumes critical importance during emergencies such as natural disasters. The problem must be addressed taking into account the damage levels, the priorities, and the consequent demand for various resources among the different cells in the area, as well as capacity considerations for the depots.
This is a depot allocation and vehicle routing problem complicated by several additional factors: repair times and time windows for repair, crew and resource scheduling with precedence constraints among resources, and demands for the same resource from multiple locations (cells in a grid mapping system). Although a considerable body of literature exists in the area of location research, the problem of simultaneously locating depots with varied capacities and assigning them to customers with different demands has not been dealt with. Together, these constraints make the problem complex, and its mathematical structure tends to be forbidding.
For the purpose of power restoration, a region may be divided into a number of cells. Each cell may be affected differently by the disaster, and the damage levels may be assessed to ascertain the amount of resources required for power restoration in each cell. In a similar vein, the available depots may also differ in their capacities with respect to the various resources. The problem, then, is to determine the locations of the available depots and to ascertain the amount of each resource transported from the different depots to the various cells, so as to minimize the total transportation cost. It is implicitly assumed that this objective also enables restoration of power in minimal time.
As mentioned earlier, the demand arising at the various cells depends on the level of damage and on priorities that differ from cell to cell. This, coupled with the fact that the depots may not be identical in capacity, leads to the formulation of the depot allocation problem as a mixed integer quadratic programming problem, which was developed during this research.
The mixed integer quadratic programming model has a term involving the product of an integer (0-1) variable and a continuous variable. The constraints reflect the demand requirements at the different cells, the capacities of the depots for the various resources, and the requirement that each depot be located at only one cell and each cell in turn contain only one depot. The objective function, as mentioned earlier, is a cost function to be minimized.
The above problem has been reformulated as an equivalent mixed integer linear programming problem, in which the objective function contains only continuous variables. The modification is effected by introducing a new variable to replace the product of the integer and continuous variables in the previous formulation, and by adding another set of constraints that enforce the equivalence of the two terms via the well-known "big M" method. To give an idea of the problem size: for a problem involving m resources, N depots, and n cells in the region, the total number of variables is Nn(1 + m + mn), of which Nn are integer. The number of constraints is (n + N + mn + 2Nmn).
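The linearization step can be sketched in generic notation (the symbols here are illustrative, not the paper's own): for a binary variable $y$ and a continuous variable $x$ with known upper bound $U$ (playing the role of "big M"), the product $yx$ is replaced by a new variable $w$ constrained so the two always agree:

```latex
w \le U\,y, \qquad
w \le x, \qquad
w \ge x - U\,(1 - y), \qquad
w \ge 0 .
```

If $y = 0$, the first and last constraints force $w = 0$; if $y = 1$, the second and third force $w = x$. The product term in the objective thus becomes linear at the cost of one extra variable and a few extra constraints per product.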
The datasets were randomly generated, and solutions were obtained using the mixed integer programming solver in IBM's Optimization Subroutine Library (OSL) and the GAMS software package on a RISC System/6000 (Model 590) workstation. The optimization problem was solved for various problem dimensions in terms of the number of cells in the region, the number of depots considered, and the number of resources used.
Our driver program is relatively straightforward. It reads in model data in the MPS format from a file (EKKMPS). The read-in model is then scaled (EKKSCAL) to avoid numerical instabilities that may be encountered during the solve process. This is followed by a "presolve" phase (EKKPRSL) on the resulting scaled matrix, which attempts to remove redundancies by eliminating rows and fixing variables; this phase transforms the model into a new set of rows and columns. The crash routine (EKKCRSH) is then invoked to identify a starting basis for solving the LP relaxation of the MIP model. OSL's primal simplex solver (EKKSSLV) is then called to obtain an optimal solution to the relaxed problem. If the resulting solution does not satisfy the integrality requirements, OSL's preprocessor EKKMPRE is called to further investigate the underlying polytope and generate feasible solutions quickly. It is after this phase that the branch-and-bound solver in OSL (EKKMSLV) is invoked to obtain an optimal solution to the problem. After an optimal integer solution is obtained, the model is "postsolved" (EKKPSSL) to transform the model data back to the original variables and rows.
While the OSL functions called from our driver program remained the same for all our test problems, our experimentation involved manipulating various control variables to influence the solution procedure. Our computational experience shows that OSL offers immense flexibility in the options it provides to the user, and the performance of the software was very satisfactory in terms of computational run times (see table below).
Problem    Rows    Columns    0-1 Variables    CPU Time (seconds)
a1          327        312               12                  2.45
b1          405        396               18                  8.30
a2          669        648               18                 15.06
b2          683        672               24                 28.52
a3         1131       1104               24                 61.08
b3         1033       1020               30                104.60
b4         1455       1440               36                259.90
a4         1713       1680               30                200.20
b5         1949       1932               42                538.16
a5         2415       2376               36                556.01
b6         2515       2496               48               1028.54
a6         3237       3192               42               1119.29
The environmental impact of oil spills can be dramatic, as evidenced by just one recent spill. The Exxon Valdez spill occurred in 1989, and Exxon has paid over $4 billion for clean up and ecological damage since then (Hann, 1992). Recently, a federal jury found Exxon liable for an additional $5 billion to compensate fishermen and others who claimed economic losses from environmental damage (U.S. News and World Report, 1994). Over 60% of the crude oil imported into the U.S. travels through the Texas Gulf region (Rainey, 1992), and during 1978-1992 tanker spills in the region surpassed those in all other U.S. regions (Schneider, 1992). Spills are often associated with tanker traffic but can originate from numerous sources, including oil well blowouts, aging pipelines, and underwater seepage (Geyer, 1981). Texas led the nation in oil spills in 1990 (Rainey, 1992) and remains especially vulnerable.
Effective clean up can be enabled by a hierarchical approach, consisting of strategic and tactical levels. Strategic decisions deal with investment and contingency planning strategies. The Oil Pollution Act of 1990 (OPA-90) requires an Area Contingency Plan to enable an effective and timely response to any oil spill likely to occur (U.S. Coast Guard, 1992). Tactical decisions, which are made after a spill has occurred, prescribe the response systems necessary to meet the time-phased clean up requirements stipulated by law.
Currently, four methods are used to clean up oil from the sea: mechanical containment, chemicals (e.g., dispersants), bioremediation, and burning. A number of factors determine the efficacy of a method, including the amount and type of oil; the ecological, oceanographic, and climatologic conditions; and the season. An effective method, or combination of methods, must be prescribed to deal with the characteristics of a given spill. Clean up capability is provided by systems composed of components (e.g., equipment, materials, and personnel) at staging areas for deployment on scene.
Our research (Srinivasa and Wilhelm, 1994; Wilhelm et al., 1994; Wilhelm and Srinivasa, 1994) formulated the strategic and tactical decision problems as integer programs and developed effective solution methods for both. The models incorporate practical features, accounting, for example, for component availability and degradation of system capabilities over time as the oil weathers in the sea. Both methods successfully solved large-scale test problems that represent application to the Galveston Bay Area (Wilhelm et al., 1993).
The strategic, area-wide contingency planning model (Wilhelm et al., 1994) accepts as inputs the set of risk points and the likely spill scenarios and response requirements for each, the sites of existing storage locations and the inventory of components at each, and potential sites for new storage locations. It prescribes a minimum total cost plan to build new storage locations and/or expand existing ones and to purchase new components and pre-position them, along with a contingency plan that determines what systems should be composed to enable an effective time-phased response for each likely spill scenario. A family of heuristics based on linear programming (LP) relaxation (Wilhelm et al., 1994) resolves the strategic model, prescribing an area-wide contingency plan. Computational tests indicate that four of the heuristics are quite effective, prescribing solutions for each of 10 test cases within 1.41 percent of optimum and within a few minutes of run time.
The objective of the strategic model is to minimize the present worth of total cost, including the fixed investment for opening new warehouses or expanding existing ones, the variable cost for additional storage space at new and expanded warehouses, the cost of purchasing and maintaining new components, the expense of relocating existing components, and the cost of composing and operating response systems. Five types of decision variables are involved: Z variables determine if a new warehouse should be opened at each potential new location or if an existing one should be expanded; F variables prescribe the storage space for each new and expanded warehouse; R variables detail the relocation of existing components; X variables specify what new components should be purchased and where each should be stored; and Y variables give the contingency plan, specifying the number of response systems and their deployment times for each likely spill scenario. The model is composed of four types of constraints in addition to those that specify variable domains: F variables are real; Z variables are binary; and R, X, and Y variables are non-negative integers. The first type of constraint assures that capacity is provided at a location only if a new warehouse is opened or an existing one is expanded there. The second type sets the capacity of each new or expanded warehouse large enough to accommodate the components stored there. The third makes sure that response systems use no more than the number of components made available either from existing inventories or from purchases. Finally, the fourth type of constraint assures that response systems will provide the required, time-phased response to spills of all likely scenarios. The large-scale test problems that represent application to the Galveston Bay Area entail some 1869 variables and 3264 constraints. The solution procedure consists of various pre-processing methods, two heuristics, and an optimizing approach.
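One plausible reading of the four constraint types can be sketched in generic notation (all symbols here are illustrative, not the authors' own; relocation (R) terms are omitted for brevity). With $Z_\ell$ binary, $F_\ell$ the storage space at location $\ell$, $I_{k\ell}$ the existing inventory and $X_{k\ell}$ the purchases of component type $k$ at $\ell$, and $Y_{js}$ the number of systems of type $j$ composed under scenario $s$:

```latex
F_{\ell} \le M\,Z_{\ell}
  \quad\text{(capacity only if opened or expanded at } \ell\text{)}
\sum_{k} v_{k}\,(I_{k\ell} + X_{k\ell}) \le F_{\ell}
  \quad\text{(space accommodates the components stored there)}
\sum_{j} a_{jk}\,Y_{js} \le \sum_{\ell} (I_{k\ell} + X_{k\ell})
  \quad\text{(systems limited by available components, each } s, k\text{)}
\sum_{j} r_{jt}\,Y_{js} \ge d_{st}
  \quad\text{(time-phased response requirement, each } s, t\text{)}
```

Here $v_k$ is the space per component of type $k$, $a_{jk}$ the number of type-$k$ components per system of type $j$, $r_{jt}$ the response a type-$j$ system provides at time $t$, and $d_{st}$ the requirement; all are assumed quantities introduced only for this sketch.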
One of the pre-processing methods exploits Generalized Upper Bound (GUB) constraints in the model to tighten the formulation. This is achieved by iteratively solving the linear programs (LPs) obtained by replacing the original objective function with the left-hand side of each GUB constraint. The LPs were solved using OSL's LP solver EKKSSLV. Other methods for tightening the formulation, such as obtaining bounds on the number of response systems necessary, also relied on EKKSSLV. Initial efforts to obtain feasible solutions using a straightforward call of EKKMSLV were discouraging: for some test cases, we failed to obtain a feasible solution even after 10 hours of run time. Hence, heuristics were devised with the goal of obtaining 'good' feasible solutions quickly. The first heuristic is based entirely on LP relaxation and obtains feasible solutions by judiciously setting lower bounds on certain variables. The choice of which variables to set relied on a property, 'response system splitting.' Because a response system's capabilities degrade over time once it is deployed, the LP solution typically prescribes splitting, whereby the same response system is deployed at different time points, each time at a fractional value, but such that all the fractional values sum exactly to an integer. Splitting means that the heuristic can set appropriate lower bounds on the variables representing the deployment of a system at the time points involved without violating feasibility with respect to resource availability, because the new bounds merely sum to the total response prescribed by the LP relaxation. The second heuristic combines LP relaxation with partial enumeration to obtain a feasible solution. The procedure completes one iteration of Heuristic I to tighten variable lower bounds and then invokes OSL's integer solver EKKMSLV. Both heuristics have been very successful in obtaining 'good' feasible solutions quickly.
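The bound-setting step that exploits response system splitting can be illustrated with a small sketch (a hypothetical helper, not the authors' code): given LP-fractional deployments of one system across several time points whose values sum to an integer, integer lower bounds are apportioned so that they preserve that total.

```python
import math

def split_lower_bounds(fractional):
    """Apportion integer lower bounds across the time points at which the LP
    deploys the same response system fractionally, preserving the (integral)
    total prescribed by the LP relaxation (largest-remainder rule)."""
    total = round(sum(fractional.values()))
    floors = {t: math.floor(v) for t, v in fractional.items()}
    remainder = total - sum(floors.values())
    # Hand out the leftover units to the largest fractional parts first.
    order = sorted(fractional, key=lambda t: fractional[t] - floors[t], reverse=True)
    for t in order[:remainder]:
        floors[t] += 1
    return floors

# The LP deploys the same system at times 1, 2, 3 at fractional levels summing to 2.
bounds = split_lower_bounds({1: 0.7, 2: 0.9, 3: 0.4})
```

Because the new bounds sum to the total the LP already prescribed, resource-availability feasibility is preserved, which is exactly the property the heuristic relies on.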
This example shows how efficient pre-processing methods for general integer variables can be devised by exploiting structures in the problem and using the solution capabilities of OSL. Furthermore, the performance of EKKMSLV can be dramatically improved by exploiting an in-depth understanding of the problem being addressed. Inputs to the tactical decision problem (Srinivasa and Wilhelm, 1994; Wilhelm and Srinivasa, 1994) include the location of the spill and its characteristics, the time-phased response required, the inventory of components at each storage location, and the sites and capacities of potential staging areas. The model prescribes a minimum time response, guarding against environmental damage by dispatching components from their storage locations to staging areas where appropriate systems can be composed for timely deployment on scene. Our solution approach includes a unique column generation method to compose systems and several pre-processing methods that facilitate calculations by bounding decision variables. A heuristic (Wilhelm and Srinivasa, 1994) based on LP relaxation found solutions for 12 test cases within 2.9 percent of optimum in less than 1.1 seconds (on average). A new optimizing method (Srinivasa and Wilhelm, 1994), which bases its pre-processing methods and its generation of families of facets on an aggregated model, solved each of 6 cases in less than 15 minutes and each of 12 test cases in less than 1.75 hours.
The objective of the tactical decision problem is to minimize total response time, reflecting the consensus that, once a spill has occurred, it is most important to respond as quickly as possible to ameliorate environmental damage. Decision variables prescribe the number of response systems of each type that must be deployed at each time point. The model consists of three types of constraints in addition to the non-negative integer requirements on the decision variables. The first type assures that response systems provide the required, time-phased response. The second assures that deployment of response systems is limited by the components available, and the third imposes restrictions on composing response systems resulting from the limited capacities of staging areas. The large-scale test problems that represent application to the Galveston Bay Area entail 180 constraints, and the column generation method generated some 350 columns, each specifying the components that compose a system, the storage location of each component, and the staging area where the system is composed. The optimizing approach is based on pre-processing methods and the generation of families of facets of the polytope associated with an aggregated model of the tactical decision problem. The aggregated model, which is also a general integer program but of substantially smaller size (about 75-85% fewer variables), is solved using EKKMSLV. However, instead of imposing the integrality of the decision variables through EKKMPS, we imposed it using EKKIMDL. This was done to take advantage of the flexibility EKKIMDL offers in defining SOSs, by placing each general integer variable in its own set. This allowed us to define a priority for each set (i.e., each general integer variable) based on the clean up capability and time availability of the response system represented by that variable.
Such a scheme for prioritizing the decision variables provided a convenient branching order and resulted in a significant improvement in the performance of EKKMSLV.
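The effect of such a priority scheme can be illustrated with a toy branching-variable selector (hypothetical code, not OSL's internals): among variables that are fractional in the LP solution, the one in the highest-priority set is branched on first, with ties broken by fractionality.

```python
def pick_branching_variable(solution, priorities, tol=1e-6):
    """Choose the next branching variable: among variables with fractional
    values, take the one with the best (lowest-numbered) priority; break
    ties by distance from the nearest integer (most fractional first)."""
    fractional = {v: x for v, x in solution.items()
                  if abs(x - round(x)) > tol}
    if not fractional:
        return None  # the solution is already integral
    return min(fractional,
               key=lambda v: (priorities[v], -abs(fractional[v] - round(fractional[v]))))

# Priorities might encode the clean up capability and availability time of
# the response system each variable represents (illustrative values only).
var = pick_branching_variable({"y1": 2.0, "y2": 1.4, "y3": 0.5},
                              {"y1": 1, "y2": 3, "y3": 2})
```

Here y1 is integral, so the choice is between y2 and y3; y3's higher priority (smaller number) wins even though both are fractional.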
These methods can prescribe area-wide contingency plans and tactical responses for actual cases in reasonable time. This capability should enable dramatically improved oil spill clean up response.
This material is based on work supported by the Texas Advanced Technology Program under Grant Number 999903-282.
Linear programming under uncertainty, or more generally, stochastic programming, is a field reborn in the new computing frontier of powerful processors networked together. Software to test approaches to modeling and solving stochastic programs is under development at IBM's Thomas J. Watson Research Center, and has proven of crucial value in early applications of multiperiod risk management in the finance industry. As Roger Wets said in his address in honor of George Dantzig at the recent Mathematical Programming Symposium in Ann Arbor, from the earliest days of linear programming Dantzig held that stochastic programming was "the real problem". Real data is always uncertain. Real decisions always have uncertain outcomes. The real problem always involves risk.

In many real world situations, there are various causes of uncertainty: interest rates may change; demand may increase or decrease; tax laws may change; a merger may or may not occur. A "scenario" is the term used for the situation we get when each of the causes of uncertainty is fixed to a single value. Clearly, some scenarios are more likely than others!

An example from financial portfolio management illustrates the issues. For a given scenario of financial market returns, one could find a portfolio composition that maximizes shareholders' equity subject to regulatory constraints. But imagine yourself as the CEO of a large insurance company analyzing 100 different scenarios of possible financial market returns. Each one comes with a different portfolio recommendation. In one scenario, it may be best to invest in stocks. In another, to invest in bonds. In yet another, it may be best to get out of town! But you have to make a decision now, while everything is still uncertain. The CEO in our example would be looking for solutions that could be implemented today and that do the best job with respect to all the risks faced by the company.
The problem, of course, is that scenario analysis gives solutions that would be optimal under that scenario, only. Scenario solutions are prescient: they know exactly what is going to happen tomorrow and the day after that. Furthermore, the scenario solutions don't agree on what to do today. Solutions for one scenario can turn out to be disastrous under another scenario. And solutions that minimize risk, taking into account the full spectrum of possible scenarios, sometimes don't show up as optimal in any one of the scenarios.

Stochastic programming is a methodology for risk management. To help the CEO analyze the risk of a decision made today, two things need to be done. First, the scenarios must be combined to give a stochastic multistage distribution, so that solutions made in one time stage will have to hedge against the uncertainty in the next stage. Second, the risk of a decision must be formed by computing costs or penalties under each scenario. For example, when a target is missed or an expensive recourse action must be undertaken, the penalty or cost is incorporated into the risk function. Stochastic programming seeks a solution that minimizes this risk. For a decision maker facing the given distribution of uncertain scenarios and who possesses the given risk profile, such a solution would be the optimal hedge.

A single instance of a stochastic program is huge -- typical sizes for practical models run into many hundreds of thousands of rows. Approximation techniques and specialized decomposition algorithms are absolutely necessary for the solution of problems of such enormous scale. Of course, in most situations one does not have a precise idea of one's risk function or the stochastic distribution of uncertain events. In a single decision situation, many different risk objectives would be selected and compared, and many scenarios would be generated and combined into different distributions.
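A miniature two-stage example (illustrative numbers only, not drawn from the engagement described here) shows why the hedged solution differs from any single scenario's optimum: a first-stage quantity x is chosen before demand is known, and each scenario then contributes a recourse penalty for unmet demand.

```python
def expected_cost(x, scenarios, unit_cost=1.0, shortfall_penalty=3.0):
    """First-stage cost of x plus the probability-weighted recourse penalty
    for the shortfall max(d - x, 0) under each scenario (d, p)."""
    return unit_cost * x + sum(p * shortfall_penalty * max(d - x, 0.0)
                               for d, p in scenarios)

# Three equally likely demand scenarios. Scenario-by-scenario optima are
# x = 50, 100, and 150; the hedged decision minimizes the *expected* cost.
scenarios = [(50.0, 1/3), (100.0, 1/3), (150.0, 1/3)]
# The objective is piecewise linear, so its minimum occurs at a demand breakpoint.
best_x = min((d for d, _ in scenarios), key=lambda x: expected_cost(x, scenarios))
```

Each scenario, analyzed alone, would prescribe ordering exactly its own demand, yet the hedged choice (x = 100 here) is what minimizes cost against the full distribution, which is precisely the distinction between scenario analysis and stochastic programming.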
Even more than in linear programming, parametric programming and sensitivity analysis form a crucial part of stochastic programming methodology. Only recently have computers attained the power and performance necessary to support risk management at the level of detail practical for industrial application. Nevertheless, stochastic programming is at a stage now where it can make a serious impact on decision situations.

Research software, dubbed SP/OSL, under development by Alan King and colleagues in the Optimization and Statistics Center at IBM's Thomas J. Watson Research Center, implements a stochastic programming interface using OSL routines as solution modules. SP/OSL supports basic stochastic programming activities, such as bundling scenarios and generating stochastic programs with linear or piecewise linear risk functions. It also incorporates decomposition tools for quick solution.

The insurance company asset allocation problem discussed above is an actual example from a consulting engagement led by Alan King. The company manages more than $20 billion in assets. A multi-year cash flow model linking portfolio transactions, earnings, and underwriting losses to basic accounting measurements, like shareholders' equity and internal rate of return, was built by members of IBM consulting practices under the guidance of insurance company experts. Scenarios of asset returns, inflation, and underwriting losses were generated from the company's proprietary forecasting models. The scenarios were bundled by SP/OSL routines, and risks were modeled as penalizations above or below company-selected targets. The stochastic programs solved by SP/OSL for the insurance company include many more scenarios than the company's scenario analysis procedure ever did. SP/OSL solves 540-scenario stochastic programs in less than an hour. (A 540-scenario problem has about 350,000 rows.)
Furthermore, the exploration of the company's risks and opportunities is far more sophisticated and comprehensive than before. The CEO is now presented with a chart summarizing the output of parametric runs comprising the solutions of over 100 stochastic programs representing different blends of risk measurements and assumptions. An early version of SP/OSL was instrumental in demonstrating the practicality of stochastic programming for the prize-winning Russell-Yasuda Kasai asset/liability management model (see reference 1). SP/OSL is now equipped to handle many types of distributions and to interface with LP modeling languages. SP/OSL resembles OSL in that it is a library of subroutines that can perform various actions on stochastic program data and models. The decomposition method built into SP/OSL has recently been parallelized (by its designer, Steve Wright) using PVM subroutines. Several universities and research labs are field-testing SP/OSL for use in risk management applications and/or the implementation of solution methods, including the University of Geneva, the University of Michigan, Georgia Tech, the University of California at Davis, the University of Texas at Austin, and the University of Cyprus. Applications under development include long-range energy planning, reservoir management, commercial aircraft arrival scheduling, and production/inventory management, to name just a few. Reference 2 documents the present state of the software. Interested readers may contact Alan King (firstname.lastname@example.org) for further information.
This work was initiated as part of an ongoing research project at Texas A&M University to investigate the use of modern cutting plane methods in solving various complex, real-world problems arising in the design of assembly systems. The simplest problem in the class of assembly system design (ASD) problems is that of assembly line balancing (ALB), which has been shown to be NP-hard (Baybars, 1986). The more general ASD problems are therefore "hard" combinatorial problems, unlikely to be amenable to efficient exact optimizing procedures. One ASD problem can be described as follows. Given a set of tasks and their precedence relationships, a set of machines to process these tasks, and a set of stations at which to locate the machines, the problem is to assign each task to a machine and to locate each machine at an activated station, while minimizing a total cost function consisting of a cost for each station activated, a cost for each machine located, and a processing cost for each task that depends on the machine to which it is assigned. A series of models capturing additional features, such as accommodating parallel machines at a station and the provision of resources, was considered. The basic model has five types of constraints. The first type requires each task to be assigned to one machine at one station. Task precedence relationships are enforced by the second type, and the third type requires that the work content at each station not exceed the cycle time. The fourth type requires that at most one machine be assigned to each activated station. The final requirement is that all variables be binary. The above model yields a 0/1 linear programming formulation associated with a positive 0/1 polytope. It is well known that all the non-trivial facets of positive polytopes have their associated normal vectors lying in the non-negative orthant.
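In generic notation (illustrative symbols only, not the paper's), the five constraint types might be written with $x_{tms} = 1$ if task $t$ is assigned to machine $m$ at station $s$, $y_{ms} = 1$ if machine $m$ is located at station $s$, and $z_s = 1$ if station $s$ is activated:

```latex
\sum_{m,s} x_{tms} = 1 \quad \forall t
  \quad\text{(each task on one machine at one station)}
\sum_{m,s} s\,x_{ims} \le \sum_{m,s} s\,x_{jms} \quad \forall (i,j)\in P
  \quad\text{(precedence: } i \text{ at a station no later than } j\text{)}
\sum_{t,m} p_{tm}\,x_{tms} \le C\,z_{s} \quad \forall s
  \quad\text{(station work content within the cycle time } C\text{)}
\sum_{m} y_{ms} \le z_{s} \quad \forall s
  \quad\text{(at most one machine per activated station)}
x_{tms} \le y_{ms}, \qquad x,\,y,\,z \in \{0,1\}
```

Note that every constraint coefficient here is non-negative, consistent with the positive 0/1 polytope structure noted above.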
Apart from exploiting this characteristic of the original polytope, no other attempt was made to study the underlying polyhedral structure to identify various classes of strong valid inequalities. The overall solution procedure consists of a cutting plane phase followed by a call to OSL's branch-and-bound solver. Our solution approach assumes that the original polytope is well approximated by the intersection of a set of "simpler" polytopes (which are represented by subsets of the original set of constraints), and that there exist effective algorithms for optimizing linear functions over these "simpler" polytopes. While the first assumption ensures that the facets of the "simpler" polytopes tend to represent high-dimensional faces of the original polytope, the second facilitates numerical computation of such facets. The solution procedure is as follows. First, the linear programming relaxation of the ASD problem is solved using OSL's LP solver EKKSSLV. If the resulting solution does not satisfy the integrality constraints, our cutting plane procedure is invoked; it seeks to identify a facet-defining inequality for one of the "simpler" polytopes (in the ASD context, e.g., knapsack polytopes or polytopes represented by a knapsack and GUB-type constraints) that cuts off the fractional solution from the underlying polytope. With respect to each of these "simpler" polytopes, the cutting plane procedure is guaranteed to identify a facet-defining cutting plane, if one exists. When a cutting plane is identified, it is appended to the original formulation, and the resulting problem is re-solved using the dual simplex algorithm. When no more cutting planes can be identified, the cutting plane phase terminates, and OSL's branch-and-bound solver, EKKMSLV, is invoked.
Our cutting plane procedure exploits the polarity correspondences existing between a positive 0/1 polytope and its antiblocker to solve the separation problems associated with each incumbent fractional solution and each "simpler" polytope. In the ASD context, we restricted our choices of so-called "simpler" polytopes to two types, namely, the polytopes of knapsack and knapsack-with-GUB problems, for which there exist pseudo-polynomial time dynamic programming algorithms. For each "simpler" polytope under analysis, an attempt is made to construct the convex hull of an affinely independent collection of maximal extreme points, and then to separate the incumbent fractional solution from this convex hull. A dual version of this separation problem is a linear program, which is solved using EKKSSLV. If the solution to this dual linear program satisfies a prespecified "membership" criterion, then the associated optimizing oracle (e.g., the pseudo-polynomial DP algorithm) is invoked to optimize a linear function (in the direction of the solution vector of the dual program) over the "simpler" polytope. If the resulting maximal extreme point satisfies a "separation" criterion, then the previous dual solution defines a facet-defining cutting plane. Otherwise, the new extreme point is added to the list of existing maximal extreme points, and the dual and oracle optimization steps are repeated until either a facet-defining inequality is identified or it is demonstrated that the incumbent fractional solution cannot be separated from the underlying "simpler" polytope. Two key factors influence the performance of the cutting plane approach outlined above: the ability to generate maximal extreme points easily, and the ability to solve the separation subproblems quickly.
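For the knapsack case, the separation idea can be illustrated by brute force on a tiny instance (the actual implementation uses the dynamic programming oracles and the auxiliary dual LP described above, not enumeration): a cover C, i.e., a set of items whose weights exceed the capacity, with sum over j in C of (1 - x*_j) less than 1 yields a cover inequality violated by the fractional point x*.

```python
from itertools import combinations

def separate_cover(weights, capacity, x_star):
    """Brute-force separation of a cover inequality for the knapsack
    constraint sum_j weights[j]*x_j <= capacity at the fractional point
    x_star. Returns (cover, measure); a measure < 1 means the inequality
    sum_{j in cover} x_j <= len(cover) - 1 cuts off x_star."""
    n = len(weights)
    best_cover, best_measure = None, float("inf")
    for r in range(1, n + 1):
        for cover in combinations(range(n), r):
            if sum(weights[j] for j in cover) > capacity:   # C is a cover
                measure = sum(1.0 - x_star[j] for j in cover)
                if measure < best_measure:
                    best_cover, best_measure = set(cover), measure
    return best_cover, best_measure

# Knapsack 3*x1 + 5*x2 + 4*x3 <= 8 at the LP-feasible point x* = (0.9, 0.6, 0.5):
cover, measure = separate_cover([3, 5, 4], 8, [0.9, 0.6, 0.5])
```

Here the cover {x2, x3} has measure 0.9 < 1, so x2 + x3 <= 1 cuts off x*, while x* itself gives x2 + x3 = 1.1. The DP-based oracle achieves the same separation without enumerating subsets.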
Our implementation is based on the rationale that generating maximal extreme points via oracle optimization is computationally burdensome, even though the underlying "positive 0/1" structure allows enumeration of a collection of affinely independent maximal extreme points; the theoretical complexity of such an enumeration procedure is of the same order as that of finding minimal covers. To avoid invoking the "costly" oracles more often than necessary, whatever extreme point information is generated for each polytope is stored in a file so that it can be reused to give a "warm start" to the cutting plane procedure whenever that polytope is considered again for separating some other fractional solution in subsequent passes. For each "simpler" polytope to be analyzed, an auxiliary linear program is set up that corresponds to the dual of the underlying separation problem. In this program, the row vectors correspond to the maximal extreme points of the underlying polytope; whenever a new extreme point is identified, a new row is added to this auxiliary linear program, and the problem is re-solved using the dual simplex solver EKKSSLV in OSL, which is typically very fast. We implemented the overall solution procedure using OSL on a RISC System/6000 Model 550. The cutting plane procedure relied extensively on the LP solver EKKSSLV, along with two specialized dynamic programming algorithms to optimize over knapsack and knapsack-with-GUB polytopes. We used a strong formulation (Pinnoi, 1994) in which the original precedence inequalities represent facets of the ASD polytope under some conditions. Our test bed consisted of some real-life problems as well as others constructed from standard line balancing problems available in the literature. One sample test problem considers the design of a single-product system accommodating 94 tasks.
The corresponding model has about 1300 binary variables and 800 constraints, for which a straightforward application of EKKMSLV, preceded by a call to EKKMPRE (in the serial environment), required slightly less than an hour to solve. In contrast, our initial implementation of the cutting plane approach typically added more than 100 cuts but closed only 15% to 20% of the LP-IP gap. This relatively poor performance was apparently caused by the intersection of the knapsack and knapsack-with-GUB polytopes providing a very poor approximation of the ASD polytope, even though the instances are sparse. These results motivated us to explore an alternative way of implementing the cutting plane method. The resulting approach invokes the cutting plane routines after branching on the variables appearing in the GUB constraints that allow at most a certain number of machines to be located at each station. A little analysis indicated that these GUB constraints form the coupling constraints which prevent the ASD polytope from being well approximated by the intersection of individual knapsack and knapsack-with-GUB polytopes. Thus, branching on these variables decouples the constraints, allowing the intersection of the resulting "simpler" polytopes to better represent the resulting convex hull. As a result, the cutting plane approach consistently performed well, closing at least 75% of the LP-IP gap in each of the test problems. For the instance for which a straightforward application of EKKMPRE followed by EKKMSLV took about an hour, the new approach required only about two and a half minutes. This example illustrates how efficient solution procedures can be devised by combining a strong formulation with the existing solution capabilities in OSL. The available routines in OSL can be extensively exploited to construct effective separation algorithms, which form a key component of all cutting plane methods.
We would like to thank the National Science Foundation (NSF) for supporting this work through grant number DDM9114396.