Orbital library

orbital.algorithm.template
Class SimulatedAnnealing

java.lang.Object
  extended by orbital.algorithm.template.GeneralSearch
      extended by orbital.algorithm.template.LocalOptimizerSearch
          extended by orbital.algorithm.template.SimulatedAnnealing
All Implemented Interfaces:
java.io.Serializable, AlgorithmicTemplate, EvaluativeAlgorithm, HeuristicAlgorithm, ProbabilisticAlgorithm

public class SimulatedAnnealing
extends LocalOptimizerSearch

Simulated Annealing (SA) search. A probabilistic and heuristic search algorithm and local optimizer.

No special implementation data structure is needed, since simulated annealing discards old nodes. Because of this "amnesia", simulated annealing is a suboptimal search strategy and is not complete (but see below). However, due to its Monte Carlo nature, it has a certain probability of avoiding local optima and actually reaching a global optimum. In practical applications, it usually scores much better than random-restart hill-climbing.

Simulated annealing problems can omit checking for solutions and simply wait until the temperature drops to zero.

Another way of using simulated annealing is first to let f settle into a good stable balance at a high temperature, and only then to decrease the temperature. Monitoring the acceptance probability and keeping it at a medium rate may improve convergence. Also, continuing to optimize at temperature 0 (which then equals ordinary hill-climbing) ensures that the solution is at least a local optimum.

The Boltzmann distribution specifies the probability of being at energy level E := f(s), given the current temperature T:

P(E) = 1/z · e^(-E/T)
where z is a normalization constant ensuring that P sums up to 1.
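The Boltzmann distribution above induces the usual acceptance rule: improvements are always taken, while a move that worsens the energy by Δ is accepted with probability e^(-Δ/T). A minimal sketch in plain Java (the class and method names are illustrative, not part of the Orbital API):

```java
import java.util.Random;

/** Illustrative sketch of the Boltzmann/Metropolis acceptance rule. */
public class MetropolisAcceptance {
    /** Accept a move from energy e to energy ePrime at temperature t? */
    static boolean accept(double e, double ePrime, double t, Random rnd) {
        double delta = ePrime - e;
        if (delta <= 0)          // improvement: always accept
            return true;
        // worsening move: accept with Boltzmann probability e^(-delta/t)
        return rnd.nextDouble() < Math.exp(-delta / t);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        System.out.println(accept(5.0, 3.0, 1.0, rnd)); // improvement: true
        System.out.println(Math.exp(-2.0 / 1.0));       // chance of a +2 move at T=1
    }
}
```

Note that as T→0 the rule degenerates to ordinary hill-climbing, and at large T almost every move is accepted.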

theorem of convergence
If T(n) ≥ (Ø+1)Δ / log(n+2) and T(n)→0 (n→∞)
Then P("at a global optimum on the n-th move") → 1 (n→∞)
Where
  • T:N→R is the "temperature" scheduling function.
  • Ø := maximum distance of two solutions (number of moves in between).
  • Δ := max {|f(s)-f(sʹ)| ¦ s,sʹ∈S ∧ P(sʹ|s)>0}. Note that like in MDPs P(sʹ|s) is the probability of reaching sʹ from s with one move.
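The theorem's bound is met by a logarithmic cooling schedule T(n) = c / log(n+2) with c ≥ (Ø+1)Δ. A small sketch (the constant c is a hypothetical problem-dependent bound chosen for illustration):

```java
/** Illustrative logarithmic cooling schedule T(n) = c / ln(n+2),
    the slowest schedule for which the convergence theorem applies. */
public class LogSchedule {
    /** c is a problem-dependent constant with c >= (diameter+1)*Delta. */
    static double temperature(double c, int n) {
        return c / Math.log(n + 2);
    }

    public static void main(String[] args) {
        double c = 10.0;                            // hypothetical bound
        System.out.println(temperature(c, 0));      // initial temperature
        System.out.println(temperature(c, 100000)); // much colder later
    }
}
```

Logarithmic cooling converges in theory but is very slow; practical implementations often use faster (e.g. geometric) schedules at the price of losing the convergence guarantee.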
(behaviour at fixed temperature T, "equilibrium")
If lim n→∞ P(Sn=s) exists, |⋃a∈A(s) t(s,a)| ≤ b is bounded, the selection of next states occurs uniformly (each with probability 1/b), and P(s→sʹ) / P(sʹ→s) = e^(-f(sʹ)) / e^(-f(s))
Then P(Sn=s) → e^(-f(s)) / ∑sʹ∈S e^(-f(sʹ)) (n→∞), which depends only on f(s).
Where
  • P(Sn=s) is the probability of being in state s at time n.
  • ⋃a∈A(s) t(s,a) is the set of states reachable from s (with any one action).
  • P(s→sʹ) is the probability that the search algorithm accepts a move from s∈S to sʹ∈S.

lim n→∞ P(Sn=s) converges independently of the initial state s0 if the Markov system underlying the state transitions is ergodic (the graph spanned by all transitions with probability >0 is connected, i.e. from any s∈S to any t∈S there is a path from s to t with non-zero transition probability). At least, it also converges if the Markov system is homogeneous (i.e. the transition probabilities are independent of time) and aperiodic (i.e. ¬∃n∈N P^n=I, with P∈[0,1]^(S×S) being the matrix of transition probabilities). Also, for example, the condition on the acceptance probability is satisfied by simulated annealing, and thus by Metropolis search (Metropolis search is simulated annealing at a fixed temperature T).
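The equilibrium statement can be checked empirically: running Metropolis search at a fixed temperature over a tiny state space, the visit frequencies approach the Boltzmann distribution e^(-f(s)) / ∑ e^(-f(sʹ)). A self-contained sketch (state space, energies, and all names are illustrative):

```java
import java.util.Random;

/** Metropolis search (simulated annealing at fixed T) over three states,
    empirically approaching the Boltzmann equilibrium distribution. */
public class MetropolisEquilibrium {
    static final double[] F = {0.0, 1.0, 2.0};   // energies f(s)

    /** Fraction of time spent in each state over the given number of steps. */
    static double[] visitFrequencies(long steps, double t, long seed) {
        Random rnd = new Random(seed);
        long[] count = new long[F.length];
        int s = 0;
        for (long i = 0; i < steps; i++) {
            int sPrime = rnd.nextInt(F.length);  // uniform symmetric proposal
            double delta = F[sPrime] - F[s];
            if (delta <= 0 || rnd.nextDouble() < Math.exp(-delta / t))
                s = sPrime;                      // Metropolis acceptance
            count[s]++;
        }
        double[] freq = new double[F.length];
        for (int j = 0; j < F.length; j++)
            freq[j] = count[j] / (double) steps;
        return freq;
    }

    public static void main(String[] args) {
        double[] freq = visitFrequencies(1_000_000, 1.0, 42);
        double z = 1 + Math.exp(-1) + Math.exp(-2);  // normalization at T=1
        for (int j = 0; j < F.length; j++)
            System.out.printf("state %d: empirical %.3f vs Boltzmann %.3f%n",
                              j, freq[j], Math.exp(-F[j]) / z);
    }
}
```

The uniform proposal is symmetric, so the acceptance-ratio condition P(s→sʹ)/P(sʹ→s) = e^(-f(sʹ))/e^(-f(s)) from the statement above holds, and the chain's stationary distribution depends only on f.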

Author:
André Platzer
See Also:
HillClimbing, ThresholdAccepting, Serialized Form
Note:
The Boltzmann machine is simulated annealing with the Boltzmann distribution applied also to improvements, instead of accepting all improvements. One could introduce MeanfieldAnnealing (alias Hopfield/Tank neural networks). The central idea of simulated annealing is to deepen the local minima by some exponential transformation.

Nested Class Summary
 
Nested classes/interfaces inherited from class orbital.algorithm.template.LocalOptimizerSearch
LocalOptimizerSearch.LocalSelection
 
Nested classes/interfaces inherited from interface orbital.algorithm.template.HeuristicAlgorithm
HeuristicAlgorithm.Configuration, HeuristicAlgorithm.PatternDatabaseHeuristic
 
Nested classes/interfaces inherited from interface orbital.algorithm.template.EvaluativeAlgorithm
EvaluativeAlgorithm.EvaluationComparator
 
Field Summary
 
Fields inherited from class orbital.algorithm.template.LocalOptimizerSearch
BEST_LOCAL_SELECTION, FIRST_LOCAL_SELECTION
 
Constructor Summary
SimulatedAnnealing(Function heuristic, Function schedule)
          Create a new instance of simulated annealing search.
 
Method Summary
 Function complexity()
          O(∞).
protected  java.util.Iterator createTraversal(GeneralSearchProblem problem)
          Define a traversal order by creating an iterator for the problem's state space.
 Function getEvaluation()
          f(n) = h(n).
 Function getHeuristic()
          Get the heuristic function used.
 Function getSchedule()
          Get the scheduling function.
 boolean isCorrect()
          Local optimizers are usually not correct.
 boolean isOptimal()
          Local optimizers are not optimal (usually).
 void setHeuristic(Function heuristic)
          Set the heuristic function to use.
 void setSchedule(Function schedule)
           
 Function spaceComplexity()
          O(b) where b is the branching factor.
 
Methods inherited from class orbital.algorithm.template.LocalOptimizerSearch
getRandom, search, setRandom
 
Methods inherited from class orbital.algorithm.template.GeneralSearch
getProblem, solve, solveImpl
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 
Methods inherited from interface orbital.algorithm.template.AlgorithmicTemplate
solve
 

Constructor Detail

SimulatedAnnealing

public SimulatedAnnealing(Function heuristic,
                          Function schedule)
Create a new instance of simulated annealing search.

Parameters:
heuristic - the heuristic cost function h:S→R to be used as evaluation function f(n) = h(n).
schedule - a mapping N→R from time to "temperature" controlling the cooling, and thus the probability of downward steps. The algorithm stops if the temperature drops to 0 (or isSolution is true, or it fails due to a lack of alternative expansion nodes).
Preconditions:
( lim t→∞ schedule(t) = 0 ∧ schedule decreases monotonically ) || abnormal(schedule)
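The control flow this constructor sets up can be sketched in plain Java, independent of the Orbital Function type: evaluate states by f(n) = h(n), cool according to the schedule, and stop once the temperature reaches 0. The toy problem, the geometric schedule, and all names below are illustrative assumptions, not the library's implementation:

```java
import java.util.Random;
import java.util.function.IntToDoubleFunction;

/** Illustrative annealing loop over integer states 0..99 with neighbors s±1. */
public class AnnealingLoop {
    /** Minimize h, cooling via schedule; stops when the temperature hits 0. */
    static int anneal(IntToDoubleFunction h, IntToDoubleFunction schedule,
                      int start, long seed) {
        Random rnd = new Random(seed);
        int s = start;
        for (int time = 0; ; time++) {
            double t = schedule.applyAsDouble(time);
            if (t <= 0)
                return s;                        // schedule reached 0: stop
            // pick a random neighbor on the cyclic state space 0..99
            int sPrime = Math.floorMod(s + (rnd.nextBoolean() ? 1 : -1), 100);
            double delta = h.applyAsDouble(sPrime) - h.applyAsDouble(s);
            // Boltzmann acceptance: always on improvement, else with e^(-delta/t)
            if (delta <= 0 || rnd.nextDouble() < Math.exp(-delta / t))
                s = sPrime;
        }
    }

    public static void main(String[] args) {
        // toy heuristic with its global minimum at s = 70
        IntToDoubleFunction h = s -> Math.abs(s - 70);
        // geometric cooling, truncated to 0 after 10000 steps
        IntToDoubleFunction schedule =
            time -> time < 10_000 ? 20.0 * Math.pow(0.999, time) : 0.0;
        System.out.println(anneal(h, schedule, 5, 42));
    }
}
```

The truncated schedule satisfies the precondition's spirit (monotonically decreasing toward 0) while reaching 0 in finite time, which is what makes the loop terminate.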
Method Detail

getEvaluation

public Function getEvaluation()
f(n) = h(n).

Returns:
the evaluation function f:S→R used to evaluate (either utility or cost) value of states.

complexity

public Function complexity()
O(∞).

Returns:
the function f for which the solve() method of this algorithm runs in O(f(n)) assuming the algorithmic problem hook to run in O(1).
See Also:
AlgorithmicTemplate.solve(AlgorithmicProblem)

spaceComplexity

public Function spaceComplexity()
O(b) where b is the branching factor.

Returns:
the function f for which the solve() method of this algorithm consumes memory with an amount in O(f(n)) assuming the algorithmic problem hook uses space in O(1).
See Also:
AlgorithmicTemplate.solve(AlgorithmicProblem)

isOptimal

public boolean isOptimal()
Local optimizers are not optimal (usually), because the scheduling function may cool too fast, so that the search cannot yet have reached the global optimum.

Returns:
whether this search algorithm is optimal, i.e. whether solutions found are guaranteed to be optimal.

isCorrect

public boolean isCorrect()
Local optimizers are usually not correct, because the scheduling function is (usually) independent of the current state and may thus cool too fast, so that the search has not yet reached an optimum.

Specified by:
isCorrect in interface ProbabilisticAlgorithm
Returns:
whether all solutions found by this algorithms are correct despite the approximative nature of this algorithm. Monte Carlo algorithms are not correct, while Las Vegas algorithms usually are.

createTraversal

protected java.util.Iterator createTraversal(GeneralSearchProblem problem)
Description copied from class: GeneralSearch
Define a traversal order by creating an iterator for the problem's state space.

Lays a linear order through the state space which the search can simply follow sequentially. Thus a traversal policy effectively reduces a search problem through a graph to a search problem through a linear sequence of states. Of course, the mere notion of a traversal policy does not yet solve the task of finding a good order of states, but only encapsulate it. Complete search algorithms result from traversal policies that have a linear sequence through the whole state space.

Specified by:
createTraversal in class GeneralSearch
Parameters:
problem - the problem whose state space to create a traversal iterator for.
Returns:
an iterator over the options of the problem's state space thereby encapsulating and hiding the traversal order.
See Also:
Factory Method, GeneralSearch.OptionIterator

getHeuristic

public Function getHeuristic()
Description copied from interface: HeuristicAlgorithm
Get the heuristic function used.

Specified by:
getHeuristic in interface HeuristicAlgorithm
Returns:
the heuristic cost function h:S→R estimating h*.

setHeuristic

public void setHeuristic(Function heuristic)
Description copied from interface: HeuristicAlgorithm
Set the heuristic function to use.

A heuristic cost function h:S→R estimates the cost to get from a node n to a goal G. For several heuristic algorithms this heuristic function needs to be admissible

h ≤ h*
i.e. h is a lower bound for the effective cost function h*. Then the pathmax heuristic f(s') := max {f(s), g(s') + h(s')} (for transitions s→s'=t(s,a) with a∈A(s)) is monotonic.

A heuristic cost function h is monotonic :⇔ the f-costs (with h) do not decrease in any path from the initial state ⇔ h obeys the triangle inequality

∀s∈S,a∈A(s) h(s) ≤ h(s') + c(s,a) with s' = t(s,a)
i.e. the sum of the costs from A to C and from C to B must not be less than the cost from A to B.

A simple improvement for heuristic functions is using pathmax.
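The pathmax formula f(sʹ) := max {f(s), g(sʹ) + h(sʹ)} is a one-line computation. A sketch with illustrative names (not part of the Orbital API):

```java
/** Illustrative helper for the pathmax improvement of a heuristic. */
public class Pathmax {
    /** f(child) = max { f(parent), g(child) + h(child) },
        which keeps f-costs from decreasing along a path. */
    static double pathmax(double fParent, double gChild, double hChild) {
        return Math.max(fParent, gChild + hChild);
    }

    public static void main(String[] args) {
        // the parent had f = 7; the child's estimate g+h = 5 would
        // otherwise let f drop, so pathmax keeps it at 7
        System.out.println(pathmax(7.0, 3.0, 2.0)); // 7.0
        System.out.println(pathmax(7.0, 6.0, 4.0)); // 10.0
    }
}
```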

Specified by:
setHeuristic in interface HeuristicAlgorithm
Parameters:
heuristic - the heuristic cost function h:S→R estimating h*. h will be embedded in the evaluation function f.
See Also:
"Pearl, J. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, Reading, Massachusetts. 1984."

getSchedule

public Function getSchedule()
Get the scheduling function.

Returns:
a mapping N→R from time to "temperature" controlling the cooling, and thus the probability of downward steps. The algorithm stops if the temperature drops to 0 (or isSolution is true, or it fails due to a lack of alternative expansion nodes).

setSchedule

public void setSchedule(Function schedule)

Orbital library
1.3.0: 11 Apr 2009

Copyright © 1996-2009 André Platzer
All Rights Reserved.