DyCors Algorithm

class DyCors.core.DyCorsMinimize(fun, x0, args, method, jac, bounds, options, restart, verbose)

Implementation of DyCors algorithm.

For a full description of the different options, see minimize().

initialize()

Compute function and gradient evaluations at the initial sampling points.

initialize_restart()

Initialize optimization from a previous optimization.

restart_dycors()

Restart DyCors keeping only the best point.

select_new_pts()

Evaluate the trial points using the surrogate model, compute their scores, and select the new points at which the expensive function will be evaluated (a sketch of the scoring criterion is given after this list of methods).

trial_points()

Generate trial points.

update()

Update information after every iteration.
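
The selection in select_new_pts() is based on the weighted-score criterion of [1]: the surrogate predictions at the trial points and their minimum distances to the previously evaluated points are both normalized to [0, 1] and combined with a weight w taken from the weights option. The following minimal sketch illustrates that criterion only; it is not the actual implementation, and the function and argument names are assumptions:

import numpy as np

def weighted_score(s, trial_pts, eval_pts, w):
    # s         : surrogate predictions at the trial points, shape (k,)
    # trial_pts : trial points, shape (k,d)
    # eval_pts  : previously evaluated points, shape (n,d)
    # w         : weight taken from options["weights"]
    smin, smax = s.min(), s.max()
    vs = (s - smin)/(smax - smin) if smax > smin else np.ones_like(s)

    # minimum distance of every trial point to the evaluated points
    dist = np.linalg.norm(trial_pts[:,None,:] - eval_pts[None,:,:],
                          axis=-1).min(axis=1)
    dmin, dmax = dist.min(), dist.max()
    vd = (dmax - dist)/(dmax - dmin) if dmax > dmin else np.ones_like(dist)

    # the trial point with the lowest score is selected
    return w*vs + (1.0 - w)*vd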

DyCors.core.minimize(fun, x0=None, args=(), method='RBF-Cubic', jac=None, bounds=None, options=None, restart=None, verbose=True)

Minimization of a scalar function of one or more variables using the DyCors algorithm [1].

This function is a wrapper around the class DyCorsMinimize.

The only mandatory parameters are fun and either x0 or restart.

Parameters
fun : callable

The objective function to be minimized.

fun(x, *args) -> float

where x is a 1-D array with shape (d,) and args is a tuple of the fixed parameters needed to completely specify the function.

x0 : ndarray, shape (m,d), optional

Starting sampling points. m is the number of sampling points and d is the number of dimensions.

args : tuple, optional

Extra arguments passed to the objective function and its derivatives (fun and jac functions).

method : str, optional

Kernel function to be used. Should be one of:

  • ‘RBF-Expo’ : derivative-free with exponential kernel

  • ‘RBF-Matern’ : derivative-free with Matérn kernel

  • ‘RBF-Cubic’ : derivative-free with cubic kernel

  • ‘GRBF-Expo’ : gradient-enhanced with exponential kernel

  • ‘GRBF-Matern’ : gradient-enhanced with Matérn kernel

  • ‘GRBF-Cubic’ : gradient-enhanced with cubic kernel

The default method is ‘RBF-Cubic’. See Kernel functions for more details on each specific method.

jac : callable, optional

Function that returns the gradient of fun.

jac(x, *args) -> array_like, shape (d,)

where x is a 1-D array with shape (d,) and args is a tuple of the fixed parameters needed to completely specify the function. Only required for the ‘GRBF-Expo’, ‘GRBF-Matern’ and ‘GRBF-Cubic’ methods.

bounds : ndarray, shape (d,2), optional

Bounds on variables. If not provided, the default is not to use any bounds on variables.

options : dict, optional

A dictionary of solver options (an illustrative example is given after this parameter list):

Nmax : int

Maximum number of function evaluations in serial.

sig0 : float or ndarray

Initial standard deviation to create new trial points.

sigm : float or ndarray

Minimum standard deviation to create new trial points.

Ts : int

Number of consecutive successful function evaluations before increasing the standard deviation to create new trial points.

Tf : int

Number of consecutive unsuccessful function evaluations before decreasing the standard deviation to create new trial points.

weights : list

Weights that will be used to compute the scores of the trial points.

l : float or ndarray

Kernel internal parameter. Kernel width.

nu : float (half integer)

Matérn kernel internal parameter. Order of the Bessel function.

optim_loo : boolean

Whether or not to optimize the kernel internal parameters.

nits_loo : int

Optimize internal parameters after every nits_loo iterations.

warnings : boolean

Whether or not to print solver warnings.

restart : ResultDyCors, optional

Result of a previous optimization from which the optimization is restarted.

verbose : boolean, optional

Whether or not to print information of the solver iterations.
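
As an illustration of the call signatures and the options described above, a possible setup could look as follows (the objective, its gradient, the extra argument a and all option values are purely illustrative assumptions, not recommended settings):

>>> import numpy as np
>>> fun = lambda x, a: a*np.sum(x**2)    # fun(x, *args) -> float
>>> jac = lambda x, a: 2.0*a*x           # jac(x, *args) -> array, shape (d,)
>>> args = (3.0,)
>>> options = {"Nmax":250, "sig0":0.2, "sigm":0.2/2**6,
...            "Ts":3, "Tf":5, "l":0.5, "nu":5/2,
...            "warnings":False}

These objects are then passed to minimize() through the corresponding keyword arguments; jac is only used together with one of the gradient-enhanced kernels.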

Returns
res : ResultDyCors

The optimization result represented as a ResultDyCors object. Important attributes are: x, the solution array; success, a Boolean flag indicating if the optimizer exited successfully; and message, which describes the cause of the termination. See ResultDyCors for a description of other attributes.

References

[1] Regis, R. G. and C. A. Shoemaker. 2013. Combining radial basis function surrogates and dynamic coordinate search in high-dimensional expensive black-box optimization. Engineering Optimization 45 (5): 529-555.

Examples

Let us consider the problem of minimizing the quadratic function.

\[f(x) = x^2\]

>>> import numpy as np
>>> from DyCors import minimize

We define the objective function, the initial sampling points and the boundaries of the domain as follows:

>>> fun = lambda x: x[0]**2
>>> x0 = np.array([-2.0, 2.0])[:,np.newaxis]
>>> bounds = np.array([-5.0, 5.0])[np.newaxis,:]

Finally, we run the optimization and print the results:

>>> res = minimize(fun, x0, bounds=bounds,
...                options={"warnings":False},
...                verbose=False)
>>> print(res["x"], res["fun"])
[1.32665389e-05] 1.7600105366604962e-10

We can also restart the optimization:

>>> res = minimize(fun, bounds=bounds,
...                options={"Nmax":500, "warnings":False},
...                restart=res, verbose=False)
>>> print(res["x"], res["fun"])
[1.55369877e-06] 2.413979870364038e-12