Aimo Törn: Global Optimization



Elements of Global Optimization Methods

Convergence with Probability One

Because guaranteed convergence is in general not possible, we may lower our ambition and require only convergence with probability 1. This is, of course, possible only for algorithms that can be made to run forever.

This seems a modest requirement for any serious algorithm. However, the requirement is very weak and of little use in practice, as the following reasoning shows.

It has been pointed out that some methods have no theoretical convergence properties (e.g., Price's CSR) and are therefore considered inferior to other methods.

Assume that we have a method that can be made to run forever, and add the following step: for i = 1, 2, 3, ..., when the number of function evaluations N reaches 10^i · Nf, sample a point at random in A to be a candidate for inclusion in the converging set, as in CSR.
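The modification above can be sketched in a few lines. This is only an illustrative sketch, not Törn's or Price's actual implementation: `base_step` and `sample_uniform` are hypothetical callables standing in for the base method's proposal rule and for uniform sampling over A, and the "converging set" is reduced to tracking the best point found.

```python
import random

def modified_run(base_step, sample_uniform, f, n_f, total_evals):
    """Run a base method, but whenever the evaluation count N reaches
    10^i * n_f (i = 1, 2, 3, ...), add one uniformly sampled point.
    The uniform samples alone give convergence with probability 1 as
    total_evals -> infinity."""
    best = None
    n = 0
    next_milestone = 10 * n_f          # 10^1 * Nf, then 10^2 * Nf, ...
    while n < total_evals:
        x = base_step()                # hypothetical base-method proposal
        n += 1
        if best is None or f(x) < f(best):
            best = x
        if n >= next_milestone:        # N has reached 10^i * Nf
            x = sample_uniform()       # the added random candidate
            n += 1
            if f(x) < f(best):
                best = x
            next_milestone *= 10       # next trigger at 10^(i+1) * Nf
    return best
```

Note how rarely the added step fires: with the first trigger at 10·Nf, any run limited to the affordable budget Nf never executes it at all, which is exactly the point of the argument.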

The modified method converges with probability 1. But because Nf is the maximal affordable number of function evaluations, the random step is never reached in practice, and the original and modified algorithms give equivalent results in any real application.