## Tech. Reports on Simulated Annealing & Related Topics

Boston U CS 94-010(1K) 94-010-genetic_search.ps.Z(848K)
abstracts/94-010

Title: On the effectiveness of genetic search in combinatorial optimization
Author: Bob Carter and Kihong Park, Computer Science Dept, Boston University
Date: November 10, 1994

Abstract:

In this paper, we study the efficacy of genetic algorithms in the context of combinatorial optimization. In particular, we isolate the effects of cross-over, treated as the central component of genetic search. We show that for problems of nontrivial size and difficulty, the contribution of cross-over search is marginal, both when run synergistically in conjunction with mutation and selection and when run with selection alone, the reference point being the search procedure consisting of just mutation and selection. The latter can be viewed as another manifestation of the Metropolis process. Considering the high computational cost of maintaining a population to facilitate cross-over search, its marginal benefit renders genetic search inferior to its singleton-population counterpart, the Metropolis process, and by extension, simulated annealing. This is further compounded by the fact that many problems arising in practice may inherently require a large number of state transitions for a near-optimal solution to be found, making genetic search infeasible given the high cost of computing a single iteration in the enlarged state-space.

U of California Berkeley, CS csd-86-285(dir)
Pincus, Jonathan. ``Transistor Sizing.'' UCB//CSD-86-285, February 1986. 102 pages.
ABSTRACT
Several methods of choosing appropriate sizes for transistors in a VLSI schematic to meet a specified delay criterion are considered. Simulated annealing and heuristic techniques are investigated. MOST is a Prolog program which makes use of information provided by the PTA timing analyzer to implement these various approaches. Both MOST and PTA are written entirely in (interpreted) Prolog; nonetheless, performance gains of over 50% as compared to an unsized circuit can be realized in a few minutes of CPU time. Using a simple RC timing model, heuristics are found to be more efficient than simulated annealing.
ENTRY
April 27, 1993
RETRIEVAL
ocr (in all.ocr); tiff (in {001-102}.tif)

Helsinki U of Tech, CS Dept, Digital Sys Lab B10.ps.Z(121K)
Kari J. Nurmela, Patric R. J. Östergård. ``Constructing Covering Designs by Simulated Annealing.'' Technical Report B10, Helsinki University of Technology, Digital Systems Laboratory, Espoo, Finland, January 1993.

Keywords: Combinatorial optimization, covering design, simulated annealing

A t-(v,m,k,\lambda) covering design is a pair (X,A), where X is a set of v elements (called points) and A is a multiset of k-subsets of X (called blocks), such that every m-subset of X intersects at least \lambda members of A in at least t points. It is required that v >= k >= t and v >= m >= t. Such a covering design gives an upper bound on C_\lambda(v,m,k,t), the number of blocks in a minimal t-(v,m,k,\lambda) covering design. In this work it is shown how simulated annealing can be used in searches for covering designs. A cost function and a neighborhood structure needed by the algorithm are presented. The space complexity of the implementation and the time complexity of the cost change calculation are also given. Several options to increase the performance of the algorithm are tested. A performance comparison to local optimization is carried out. Finally, a description of how to obtain, compile, and use the computer program cover, which has been developed during the research, is given.
B10.ps.Z
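The cost function and neighborhood structure mentioned in the abstract can be illustrated with a minimal sketch (our own simplification, not the authors' cover program): for the special case m = t and \lambda = 1, the cost of a candidate set of blocks is the number of t-subsets left uncovered, and a move swaps one point of one randomly chosen block.

```python
import math
import random
from itertools import combinations

def cost(blocks, v, t):
    # Number of t-subsets of {0..v-1} contained in no block (m = t, lambda = 1).
    covered = set()
    for b in blocks:
        covered.update(combinations(sorted(b), t))
    return sum(1 for s in combinations(range(v), t) if s not in covered)

def anneal_cover(v, k, t, n_blocks, T=2.0, cooling=0.999, steps=20000, seed=1):
    rng = random.Random(seed)
    blocks = [set(rng.sample(range(v), k)) for _ in range(n_blocks)]
    c = cost(blocks, v, t)
    while steps > 0 and c > 0:
        i = rng.randrange(n_blocks)          # pick a block ...
        out = rng.choice(sorted(blocks[i]))  # ... a point to remove ...
        inc = rng.choice([x for x in range(v) if x not in blocks[i]])
        blocks[i].remove(out)
        blocks[i].add(inc)                   # ... and a point to add
        c_new = cost(blocks, v, t)
        # Metropolis acceptance: downhill always, uphill with prob exp(-dE/T)
        if c_new <= c or rng.random() < math.exp((c - c_new) / T):
            c = c_new
        else:
            blocks[i].remove(inc)
            blocks[i].add(out)               # undo the move
        T *= cooling
        steps -= 1
    return blocks, c
```

The block count, cooling rate, and starting temperature are illustrative defaults, not values from the report.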

Helsinki U of Tech, CS Dept, Digital Sys Lab A27.ps.Z(247K)
Kari J. Nurmela. ``Constructing Combinatorial Designs by Local Search.'' Research Report A27, Helsinki University of Technology, Digital Systems Laboratory, Espoo, Finland, November 1993.

Keywords: Combinatorial designs, combinatorial optimization, great deluge algorithm, record-to-record travel

In this work probabilistic search heuristics are used in searches for covering designs, packing designs and t-designs. Each covering design gives an upper bound for C_\lambda (v,m,k,t), the minimum size of a t-(v,m,k,\lambda ) covering design, while each packing design gives a lower bound for D_\lambda (v,m,k,t), the maximum size of a t-(v,m,k,\lambda) packing design, and each t-design establishes the existence of such a design. The designs are constructed using probabilistic combinatorial optimization methods including simulated annealing, tabu search, threshold accepting, the great deluge algorithm and record-to-record travel. Implementation issues of these algorithms, such as cost functions and neighborhood structures, are discussed. Comparisons of the performance of these algorithms on some designs are carried out. Some elementary search methods, including the steepest descent algorithm and random walk, are included in the comparisons. Finally, the results of the searches are tabulated. The object class hierarchy of the program (written in C++) and brief class descriptions are presented in the appendix.
A27.ps.Z

U of California Berkeley, ERL erl-93-88(dir)
Magnussen, Holger; Nossek, Josef A.; Chua, Leon O. ``The Learning Problem For Discrete-Time Cellular Neural Networks as a Combinatorial Optimization Problem.'' UCB//ERL-93-88, December 1993. 48 pages.
ABSTRACT
"Global" Learning algorithms for Discrete-Time Cellular Neural Networks (DTCNNs) are a class of learning algorithms where the algorithm designs the trajectory of the network. For binary input patterns, the global learning problem for DTCNNs is a combinatorial optimization problem. Properties of the solution space can be deduced using the available theory on Linear Threshold Logic. Results on the required accuracy of the network parameters are given. The problem of deciding whether a given task can be learned for a DTCNN architecture (feasibility problem) is conjectured to be NP-complete. A cost function is defined that is a measure of the errors in the mapping process from a set of input images onto the desired output images. Simulated Annealing methods are used to minimize this cost function. The algorithm is used to find the parameters of a standard DTCNN architecture for a simple pattern recognition task.
ENTRY
June 16, 1994
RETRIEVAL
ocr (in all.ocr); tiff (in {001-048}.tif)

Neuroprose cohen.thesis.ps.Z(586K)
Thesis/cohen.thesis.ps.Z bardak@ccsg.tau.ac.il 137 pages In this thesis, several methods to train TDRNNs are explored, both deterministic and stochastic, and Adaptive Simulated Annealing (ASA) is found to be the best training method.

U of California Berkeley, ERL erl-90-6(dir)
Sorkin, Gregory. ``Bivariate Time Series Analysis of Simulated Annealing Data.'' UCB//ERL-90-6, January 1990. 66 pages.
ABSTRACT
Simulated annealing (SA) is a recent technique for finding good solutions to a wide variety of combinatorial optimization problems. Given is a graph with an energy E assigned to each node. In simulated annealing parlance, the nodes are called `states', the arcs represent `moves' from one state to a neighboring state, and the energy is sometimes called `cost'. The simulated annealing algorithm then proceeds as follows:

state = random initial state
repeat (until done) {
    T = new temperature
    repeat (until inner-loop criterion) {
        newstate = random neighbor(state)
        Delta E = E(newstate) - E(state)
        If Delta E <= 0, state = newstate.
        Else state = newstate with probability e^(-Delta E / T)
    }
}

The temperature T is a parameter of the algorithm. It is always decreased gradually as the annealing proceeds, but the optimal control of T is not understood.

It can easily be shown that this process is equivalent to a random walk on a graph containing self-loops, with random selection of an edge out of a node in proportion to the weight of that edge, and with the edge weights determined by the temperature T.

A typical application is a placement problem, say that of placing 100 circuits on a 10 x 10 grid. Here a state is a permutation of the numbers from 1 to 100, representing a placement. A move could consist of interchanging any two circuits. Thus each state would have C(100,2) = 4950 neighbors. We also define the distance between two states as the length of a shortest path connecting them.

While annealing works well on a wide variety of practical problems, it cannot work well on arbitrary graphs, nor on any graph with an arbitrary energy function. The goal of our research has been to characterize the energy landscape itself (the graph and energy function) and the behavior of the annealing algorithm. Because the energy landscape itself is terrifically complex, we have focused on the energy trajectory, i.e. the sequence of energies following each move accepted.
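The loop described above translates into a short generic routine; the following is our own sketch (the toy energy function and cooling schedule are illustrative, not from the report):

```python
import math
import random

def simulated_annealing(E, neighbor, state, schedule, inner_iters, seed=0):
    # Follows the pseudocode above: for each temperature T, propose random
    # neighbors; accept downhill moves always, uphill with prob exp(-dE/T).
    rng = random.Random(seed)
    for T in schedule:
        for _ in range(inner_iters):
            new = neighbor(state, rng)
            dE = E(new) - E(state)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                state = new
    return state

# Illustrative toy problem: minimize E(x) = (x - 3)^2 over the integers,
# with neighbors differing by +/-1 and a geometric cooling schedule.
E = lambda x: (x - 3) ** 2
neighbor = lambda x, rng: x + rng.choice((-1, 1))
best = simulated_annealing(E, neighbor, 50, [10.0 * 0.9 ** k for k in range(60)], 50)
```

At high T the walk accepts almost any move; as T shrinks, uphill moves are rejected and the state freezes near a minimum.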

ENTRY
Jul 25, 1994
RETRIEVAL
ocr (in all.ocr); tiff (in {001-066}.tif)

U of California Berkeley, ERL erl-90-5(dir)
Sorkin, Gregory. ``Univariate Time Series Analysis of Simulated Annealing Data.'' UCB//ERL-90-5, January 1990. 70 pages.
ABSTRACT
Simulated annealing (SA) is a recent technique for finding good solutions to a wide variety of combinatorial optimization problems. Given is a graph with an energy E assigned to each node. In simulated annealing parlance, the nodes are called `states', the arcs represent `moves' from one state to a neighboring state, and the energy is sometimes called `cost'. A typical application is a placement problem, say that of placing 100 circuits on a 10 x 10 grid. Here a state is a permutation of the numbers from 1 to 100, representing a placement. A move could consist of interchanging any two circuits. Thus each state would have C(100,2) = 4950 neighbors. We will also define the distance between any two states as the length of a shortest path connecting them.

The simulated annealing algorithm is as given in Figure 0.

The temperature T is a parameter of the algorithm. It is always decreased gradually as the annealing proceeds, but the optimal control of T is not understood.

It can easily be shown that this process is equivalent to that of a random walk on the graph, with self-loop edges, present, with random selection of an edge out of a node in proportion to the weight of that edge, and with the edge weights determined by the temperature T.

While annealing works well on a wide variety of practical problems, it cannot work well on arbitrary graphs, nor on any graph with an arbitrary energy function. The goal of my research has been to characterize the energy landscape itself (the graph and energy function) and the behavior of the annealing algorithm.

state = random initial state
repeat (until done) {
    T = new temperature
    repeat (until inner-loop criterion) {
        newstate = random neighbor(state)
        Delta E = E(newstate) - E(state)
        If Delta E < 0, state = newstate.
        Else state = newstate with probability e^(-Delta E / T)
    }
}

Figure 0: Annealing algorithm.

Because the energy landscape itself is terrifically complex, we have focused on the energy trajectory, i.e. the sequence of energies following each move accepted.

ENTRY
Jul 25, 1994
RETRIEVAL
ocr (in all.ocr); tiff (in {001-070}.tif)

Stanford U, Heuristic Prog Proj KSL-93-12.ps(611K)
Altman, R. (1993). A Probabilistic Algorithm for Calculating Structure: Borrowing from Simulated Annealing. Conference on Uncertainty in Artificial Intelligence, Washington D.C. KSL 93-12.

CS/E 94-016 Combining Simulated Annealing with Local Search Heuristics Olivier C. Martin and Steve W. Otto

Aalborg U, M&CS R90-09.ps.Z(151K) tech-reports(dir)
@techreport{Kjaerulff:90,
  author      = "Uffe Kj{\ae}rulff",
  title       = "Triangulation of Graphs --- Algorithms Giving Small Total State Space",
  location    = "tech-reports/R90-09.ps.Z",
  institution = "Department of Mathematics and Computer Science, Aalborg University, Denmark",
  year        = "1990",
  type        = "Technical Report",
  number      = "R~90-09",
  month       = mar,
  abstract    = "The problem of achieving small total state space for triangulated belief graphs (networks) is considered. It is an NP-complete problem to find a triangulation with minimum state space. Our interest in this topic originates from the field of knowledge engineering, where the applied knowledge representation scheme is provided by the notion of causal probabilistic networks (belief networks); CPNs for short. The application of a generalised evidence propagation scheme in CPNs requires triangularity (chordality) of the actual network. The paper includes a survey and evaluation of existing triangulation algorithms, most of which are found to be highly ineffective w.r.t. the applied efficiency measure. Simple heuristic methods are presented and found to produce high-quality triangulations. Moreover, we introduce a method by which any non-minimal triangulation may be turned into a minimal one. Furthermore, we present a stochastic algorithm based on a technique known as simulated annealing by which the optimal solution to NP-complete problems may be found with probability arbitrarily close to 1.",
  keywords    = "graph triangulation, graph algorithms, causal probabilistic networks, belief networks, NP-complete problems, simulated annealing"
}
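The "simple heuristic methods" the abstract refers to are typified by greedy elimination orderings; the following is our own generic min-fill sketch (not code from the report), which at each step eliminates the vertex whose neighborhood needs the fewest fill-in edges:

```python
def min_fill_triangulate(adj):
    # Greedy min-fill heuristic: repeatedly eliminate the vertex whose
    # neighborhood requires the fewest fill-in edges to become a clique.
    # Returns the set of fill-in edges; adding them makes the graph chordal.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    remaining = set(adj)
    fill = set()

    def fill_needed(v):
        nbrs = [u for u in adj[v] if u in remaining]
        return [(a, b) for i, a in enumerate(nbrs) for b in nbrs[i + 1:]
                if b not in adj[a]]

    while remaining:
        v = min(remaining, key=lambda u: len(fill_needed(u)))
        for a, b in fill_needed(v):
            adj[a].add(b)
            adj[b].add(a)
            fill.add(frozenset((a, b)))
        remaining.remove(v)
    return fill
```

On a 4-cycle this adds exactly one chord; on an already-chordal graph it adds nothing. Minimizing total state space, as in the report, would weight each choice by clique state-space size rather than fill-edge count.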

Helsinki U of Tech, CS Dept, Digital Sys Lab A25.ps.Z(122K)
Patric R. J. Östergård. ``Construction Methods for Covering Codes.'' Research Report A25, Helsinki University of Technology, Digital Systems Laboratory, Espoo, Finland, September 1993.
A covering code in a Hamming space is a set of codewords with the property that any word in the space is within a specified Hamming distance, the covering radius, from at least one codeword. In this thesis, efficient construction methods for such codes are considered. The constructions work, with some exceptions, for codes over alphabets consisting of any number of symbols. Codes over mixed alphabets are also discussed. Most of the methods are developed in order to determine values of K_q(n,R), the minimum number of codewords in a q-ary code of length n and covering radius R. Codes obtained by the constructions prove upper bounds on this function. In many of the constructions simulated annealing, a probabilistic optimization method, has turned out to perform very well. Simulated annealing cannot be used to prove optimality of codes found; in that case, the problem is viewed and solved as a set covering problem. For larger codes, a direct approach is not generally feasible; it is shown how good codes can be found by combining existing codes, or by imposing some structure on the codes. A matrix method that is presented follows the latter principle; a code constructed by this method consists of cosets of a linear code. To be able to combine existing codes efficiently, it is necessary that the codes involved can be partitioned into subcodes with certain properties; subnorm and norm are concepts that are introduced to describe partitions of codes. Finally, some families of combinatorial methods are presented.
A25.ps.Z
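The defining property is easy to check by brute force for small parameters; the following is our own illustrative sketch (exponential in n, so usable only for tiny codes): any code whose covering radius is at most R witnesses K_q(n,R) <= |code|.

```python
from itertools import product

def covering_radius(code, q, n):
    # Max over all q-ary words of length n of the Hamming distance to the
    # nearest codeword (brute force, feasible only for small q**n).
    def dist(x, y):
        return sum(a != b for a, b in zip(x, y))
    return max(min(dist(w, c) for c in code)
               for w in product(range(q), repeat=n))
```

For instance, the binary repetition code {000, 111} has covering radius 1, which shows K_2(3,1) <= 2.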

Helsinki U of Tech, CS Dept, Digital Sys Lab A22.ps.Z(135K)
Patric R. J. Östergård and Heikki O. Hämäläinen. ``New Upper Bounds for Binary/Ternary Mixed Covering Codes.'' Research Report A22, Helsinki University of Technology, Digital Systems Laboratory, Espoo, Finland, March 1993.

Keywords: Code construction, covering code, covering radius, football pool problem, mixed code, simulated annealing

A table of upper bounds for K_{3,2}(n_1,n_2;R), the minimum number of codewords in a covering code with n_1 ternary coordinates, n_2 binary coordinates, and covering radius R, in the range n = n_1+n_2 <= 13, R <= 3, is presented. Explicit constructions of codes are given to prove the new bounds and verify old bounds. These binary/ternary covering codes can be used as systems for the football pool game. The results include a new upper bound for the football pool problem for 9 matches, showing that K_3(9,1) <= 1356.
A22.ps.Z

Helsinki U of Tech, CS Dept, Digital Sys Lab A18.ps.Z(169K)
Patric R. J. Östergård. ``Constructions of Mixed Covering Codes.'' Research Report A18, Helsinki University of Technology, Digital Systems Laboratory, Espoo, Finland, December 1991.

Keywords: Covering code, covering radius, football pool problem, mixed code, simulated annealing

In this work construction methods for so-called mixed covering codes are developed. Interest in covering codes has grown considerably in recent years. They have the property that all words in the space are within a given Hamming distance, called the covering radius, from some codeword. Traditionally, mainly spaces where all coordinates have the same arity (usually 2 or 3, that is, binary or ternary) have been discussed. In this work we consider mixed spaces F_{q_1}^{n_1}F_{q_2}^{n_2}\cdots F_{q_m}^{n_m}, where the coordinates can be of varying arities. The approach is very general; no restrictions are set upon m and the arities q_i. The construction methods consist of generalizations of known constructions for covering codes and some completely new constructions. They are divided into three classes: direct constructions, constructions of new codes from old, and the matrix method. Through these constructions, upper bounds for the minimal number of codewords in a code covering a certain space with a certain covering radius (K_{q_1,q_2, ... ,q_m}(n_1,n_2, ... ,n_m;R)) are improved. The results include a new record-breaking binary code, proving K_2(12,2) <= 78.
A18.ps.Z

Neuroprose indiveri.nn_defect_detect.ps.Z(417K)
indiveri.nn_defect_detect.ps.Z giacomo@dibe.unige.it 9 pages Neural net for vertical feature recognition applied to structural defect detection through FMP inspection. A digital architecture for the implementation is presented. Learning is done off-line through the adaptive simulated annealing algorithm.

Neuroprose zhu.thesis.ps.Z(868K)
Thesis/zhu.thesis.ps.Z hzhu@liverpool.ac.uk 222 pages. Foundation of stochastic adaptive computation based on neural networks. Simulated annealing learning rule superior to backpropagation and Boltzmann machine learning rules. Reinforcement learning for combinatorial state space and action space. (Mathematics with simulation results plus philosophy.)

Boston U CS 94-001(1K) 94-001-maxclique.Z(98K)
abstracts/94-001

Title: On the Performance of Polynomial-time CLIQUE Algorithms on Very Large Graphs
Author: Steve Homer and Marcus Peinado, Boston University
Date: January 1994

Abstract:

The performance of a randomized version of the subgraph-exclusion algorithm (called Ramsey) for CLIQUE by Boppana and Halld\'{o}rsson is studied on very large graphs. We compare the performance of this algorithm with the performance of two common heuristic algorithms, the greedy heuristic and a version of simulated annealing. These algorithms are tested on graphs with up to 10,000 vertices on a workstation and graphs as large as 70,000 vertices on a Connection Machine. Our implementations establish the ability to run clique approximation algorithms on very large graphs. We test our implementations on a variety of different graphs. Our conclusions indicate that on randomly generated graphs minor changes to the distribution can cause dramatic changes in the performance of the heuristic algorithms. The Ramsey algorithm, while not as good as the others for the most common distributions, seems more robust and provides a more even overall performance. In general, and especially on deterministically generated graphs, a combination of simulated annealing with either the Ramsey algorithm or the greedy heuristic seems to perform best. This combined algorithm works particularly well on large Keller and Hamming graphs and has a competitive overall performance on the DIMACS benchmark graphs.

U of Virginia CS&IPC CS-93-52.ps.Z(151K)
[CS-93-52] James M. Varanelli and James P. Cohoon, A Fast Method for Generalized Starting Temperature Determination in Two-Stage Simulated Annealing Systems, September 23, 1993.

CS/E 93-017 Combining Simulated Annealing with Local Search Heuristics Olivier C. Martin and Steve W. Otto, September 1993

German Natl Res Ctr CS, Reinforcement Learning zhu.thesis.ps.Z(868K)
zhu.thesis.ps.Z hzhu@liverpool.ac.uk 222 pages Foundation of stochastic adaptive computation based on neural networks. Simulated annealing learning rule superior to backpropagation and Boltzmann machine learning rules. Reinforcement learning for combinatorial state space and action space. (Mathematics with simulation results plus philosophy.)

tr-ri-93-117

R. Diekmann, R. Lueling, J. Simon

Problem independent distributed simulated annealing and its applications

February 1993

Abstract:

Simulated annealing has proven to be a good technique for solving hard combinatorial optimization problems. Some attempts at speeding up annealing algorithms have been based on shared memory multiprocessor systems. Parallelizations for certain problems on distributed memory multiprocessor systems are also known.

In this paper, we present a problem-independent, general-purpose parallel implementation of simulated annealing on large distributed memory message-passing multiprocessor systems. The sequential algorithm is studied and we give a classification of combinatorial optimization problems together with their neighborhood structures. Several parallelization approaches are examined, considering their suitability for problems of the various classes. For typical representatives of the different classes, good parallel simulated annealing implementations are presented.

We describe in detail several 'tricks' increasing the efficiency and attained solution quality of the different parallel implementations. Extensive measurements of efficiency, solution quality and other parameters of the algorithms are presented for different numbers of processors. These measurements show that our algorithms scale up to more than 120 processors. Some applications are described in detail, showing the practical relevance of our work. All algorithms are implemented in OCCAM-2 on a freely configurable transputer system.

Keywords:

Combinatorial optimization, simulated annealing, parallel processing, distributed memory, transputer, travelling salesman, partitioning, link assignment, network construction

U of Virginia CS&IPC IPC-91-03.ps.Z(245K)
[IPC-91-03] D.E. Brown and C.L. Huntley, A Practical Application of Simulated Annealing to Clustering, February 1990.

Aalborg U, M&CS kjaerulff92a.ps.Z(105K)
@article{kjaerulff:92a,
  author   = "Kj{\ae}rulff, U.",
  title    = "Optimal Decomposition of Probabilistic Networks by Simulated Annealing",
  location = "papers/kjaerulff92a.ps.Z",
  journal  = "Statistics and Computing",
  year     = "1992",
  volume   = "2",
  pages    = "7--17",
  abstract = "This paper investigates the applicability of a Monte Carlo technique known as `simulated annealing' to achieve optimum or sub-optimum decompositions of probabilistic networks under bounded resources. High-quality decompositions are essential for performing efficient inference in probabilistic networks. Optimum decomposition of probabilistic networks is known to be NP-hard (Wen 1990). The paper proves that cost-function changes can be computed locally, which is essential to the efficiency of the annealing algorithm. Pragmatic control schedules which reduce the running time of the annealing algorithm are presented and evaluated. Apart from the conventional temperature parameter, these schedules involve the radius of the search space as a new control parameter. The evaluation suggests that the inclusion of this new parameter is important for the success of the annealing algorithm for the present problem.",
  keywords = "Simulated annealing, global optimization, Monte Carlo algorithms, graph decomposition, probabilistic networks, NP-complete problems"
}

rosen.advsim.ps.Z rosen@ringer.cs.utsa.edu Preprint: Comparative performances of Advanced Simulated Annealing Methods on optimizing a nowhere-differentiable function

U of Missouri-Rolla CS 92-06.ps(223K)
92-05-FAULT-TOLERANT DISTRIBUTED DATABASE LOCK MANAGERS FORMALLY DERIVED FROM PROGRAM VERIFICATION Hanan Lutfiyya, Martina Schollmeyer and Bruce McMillin

This work was supported in part by the National Science Foundation under Grant Numbers MIP-89009749 and CDA-8820714, and in part by the AMOCO Faculty Development Program.

This paper presents a system for formally deriving executable assertions that can be evaluated in a faulty distributed computing environment. Since executable assertions for fault tolerance need to show that a program meets its specification, and since program verification is the process of formally showing that a program satisfies some particular properties with respect to its specification, we use program verification as a basis for derivation. It is well known that in the sequential computing environment the assertions from a program verification proof outline may be translated directly into executable assertions. However, due to the lack of global state information in a distributed program, the above translation will not work. The transformation system described in this paper addresses this problem by consistently communicating state information relevant to the verification proof at run time. The applicability of the transformation system is demonstrated through treatment of a distributed transaction scheduler.

Keywords: Executable Assertions, Formal Methods, Concurrent Program Verification, Fault Tolerance, Transformation

CSC-92-06-RELAXING SYNCHRONIZATION IN DISTRIBUTED SIMULATED ANNEALING Chul-Eui Hong and Bruce M. McMillin

This work was supported in part by the National Science Foundation under Grant Number CDA-8820714 and in part by the University of Missouri Weldon Springs Foundation.

Simulated annealing is an attractive, but expensive, heuristic for approximating the solution to combinatorial optimization problems. Attempts to parallelize simulated annealing, particularly on distributed memory multicomputers, are hampered by the algorithm's requirement of a globally consistent system state. In a multicomputer, maintaining the global state S involves explicit message traffic and is a critical performance bottleneck. To mitigate this bottleneck, it becomes necessary to amortize the overhead of these state updates over as many parallel state changes as possible. By using this technique, errors in the actual cost C(S) of a particular state S will be introduced into the annealing process. This paper places analytically derived bounds on this error in order to assure convergence to the correct result. The resulting parallel simulated annealing algorithm dynamically changes the frequency of global updates as a function of the annealing control parameter, i.e., temperature. Implementation results on an Intel iPSC/2 are reported.

Digital Equipment Corp, Paris Res Lab PRL-RR-6.ps.Z(103K)
PRL-RR-6.ps.Z PRL Research Report number 6

Binary Periodic Synchronizing Sequences. Marcin Skubiszewski. May 1991.

In this article, we consider words over {0,1}. The autodistance of such a word is the lowest among the Hamming distances between the word and its images under circular permutations other than the identity; the word's reverse autodistance is the highest among these distances. For each l>2, we study the words of length l whose autodistance and reverse autodistance are close to l/2 (we call such words synchronizing sequences). We establish, for every l>3, an upper bound on the autodistance of words of length l. This upper bound, called up(l), is very close to l/2. We briefly describe the maximal period linear recurring sequences, a previously known family of words over {0,1}; such words exist for every length of the form l = 2^k - 1 and their autodistances achieve the upper bound up(l). Examples of words whose autodistance and reverse autodistance are both equal or close to up(l) are discussed; we describe the method (based on simulated annealing) which was used to find the examples. We prove that, for sufficiently large l, an arbitrarily high proportion of words of length l will have both their autodistance and reverse autodistance very close to up(l).
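Both quantities defined above are straightforward to compute for short words; a minimal sketch of our own:

```python
def autodistances(w):
    # Hamming distances between a 0/1 word and each of its nontrivial
    # circular shifts; the autodistance is the minimum of these distances,
    # the reverse autodistance the maximum.
    l = len(w)
    ds = [sum(a != b for a, b in zip(w, w[k:] + w[:k])) for k in range(1, l)]
    return min(ds), max(ds)
```

For the maximal period linear recurring sequence 1110100 (l = 7 = 2^3 - 1), every nontrivial shift differs from the word in exactly 4 positions, so autodistance and reverse autodistance are both 4, close to l/2 as the abstract describes.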

U of Missouri-Rolla CS 91-09.ps(199K)
91-09-COMPOSITE STOCK CUTTING THROUGH SIMULATED ANNEALING Hanan Lutfiyya and Bruce McMillin, Computer Science Department

Pipatpong Poshyanonda and Cihan Dagli Department of Engineering Management

Journal of Mathematical and Computer Modeling, Pergamon Press, New York, Vol. 16, No. 1, pp. 57-74.

This work was supported in part by the National Science Foundation under Grant Numbers MIP-8909749 and CDA-8820714, in part by the AMOCO Faculty Development Program, in part by the Manufacturing Research and Training Center (MRTC), in part by the McDonnell Douglas Corporation, and in part by the University of Missouri Weldon Springs Fund.

This paper explores the use of Simulated Annealing as an optimization technique for the problem of Composite Material Stock Cutting. The shapes are not constrained to be convex polygons or even regular shapes. However, due to the composite nature of the material, the orientation of the shapes on the stock is restricted. For placements of various shapes, we show how to determine a cost function, annealing parameters, and performance.

Index Terms: Optimization, Probabilistic Methods, Stock Cutting, Bin Packing.

### Agent Based Frameworks

National Research Council of Canada, Knowledge Systems Lab NRC-38329.ps.Z(20K)
Innes A. Ferguson. ``Integrated Control and Coordinated Behavior: a Case for Agent Models.'' Intelligent Agents - Proc. of the 1994 Workshop on Agent Theories, Architectures, and Languages. Lecture Notes in Computer Science Series, Springer-Verlag: Berlin. 1995.
This paper presents a new architecture for controlling autonomous agents in dynamic multi-agent worlds, building on previous work addressing reactive and deliberative control methods. The proposed multi-layered architecture allows a rationally bounded, goal-directed agent to reason predictively about potential conflicts by constructing causal theories which explain other agents' observed behaviors and hypothesize their intentions; at the same time it enables the agent to operate autonomously and to react promptly to changes in its real-time environment. A principal aim of this research is to understand the role different functional capabilities play in constraining an agent's behavior under varying environmental conditions. To this end, an experimental testbed has been constructed comprising a simulated multi-agent world in which a variety of agent configurations and behaviors have been investigated. A number of experimental findings are reported.
NRC-38329.ps.Z

National Research Council of Canada, Knowledge Systems Lab NRC-38326.ps.Z(59K)
Innes A. Ferguson. ``Autonomous Agent Control: a Case for Integrating Models and Behaviors.'' Proc. AAAI Fall Symposium on Control of the Physical World by Intelligent Agents: 46-54. 1994.
It is becoming widely accepted that neither purely reactive nor purely deliberative control techniques are capable of producing the range of behaviors required of intelligent computational agents in dynamic, unpredictable, multi-agent worlds. This paper presents a new architecture for controlling autonomous agents, building on previous work addressing reactive and deliberative control methods. The proposed multi-layered architecture allows a resource-bounded, goal-directed agent to reason predictively about potential conflicts by constructing causal theories or models which explain other agents' observed behaviors and hypothesize their goals and intentions; at the same time it enables the agent to operate autonomously and to react promptly to changes in its real-time environment. A principal aim of this research is to understand the role different functional capabilities play in constraining an agent's behavior under varying environmental conditions. To this end, an experimental testbed has been constructed comprising a simulated multi-agent world in which a variety of agent configurations and behaviors have been investigated. A number of experimental findings are reported.
NRC-38326.ps.Z

National Research Council of Canada, Knowledge Systems Lab NRC-38327.ps.Z(59K)
Innes A. Ferguson. ``On the Role of Device Modelling for Autonomous Agent Control.'' Proc. AAAI Workshop on Representing and Reasoning with Device Function: 28-36. 1994.
It is becoming widely accepted that neither purely reactive nor purely deliberative control techniques are capable of producing the range of behaviors required of intelligent computational agents in dynamic, unpredictable, multi-agent worlds. This paper presents a new architecture for controlling autonomous agents, building on previous work addressing reactive and deliberative control methods. The proposed multi-layered architecture allows a resource-bounded, goal-directed agent to reason predictively about potential conflicts by constructing causal theories or device models which explain other agents' observed behaviors and hypothesize their intentions and function; at the same time it enables the agent to operate autonomously and to react promptly to changes in its real-time environment. A principal aim of this research is to understand the role different behavioral capabilities play in constraining an agent's function under varying environmental conditions. To this end, an experimental testbed has been constructed comprising a simulated multi-agent world in which a variety of agent configurations and behaviors have been investigated. A number of experimental findings are reported.
NRC-38327.ps.Z