mymesh.register.optimize

mymesh.register.optimize(objective, method, x0=None, bounds=None, optimizer_args=None)

Optimization interface for registration, providing access to optimizers from scipy.optimize and pdfo.

Parameters:
  • objective (callable) – Objective function that takes a single input, x.

  • method (str) –

    Optimization algorithm. For the most part, these are direct interfaces to either scipy.optimize or pdfo; however, some predefined options are set for certain methods.

    The following are global optimization methods that seek the global minimum of the objective function within specified bounds. The bounds input is required for all of these methods; a brief usage sketch follows the list.

    • ’direct’: Uses the DIRECT algorithm through scipy.optimize.direct().

    • ’directl’: Uses the locally-biased version of the DIRECT algorithm through scipy.optimize.direct(). This is equivalent to using method=’direct’ with optimizer_args=dict(locally_biased=True).

    • ’differential_evolution’: Uses the differential evolution algorithm through scipy.optimize.differential_evolution().

    • ’brute’: Uses a brute force approach, evaluating the function at every point within a multidimensional grid using scipy.optimize.brute(). It’s recommended to use the ‘Ns’ option to specify the number of points to sample along each axis (optimizer_args=dict(Ns=n)); the default value of 20 may be too high for many registration applications with large datasets. The total number of function evaluations is Ns**len(x). By default, the optional ‘finish’ input, which performs local optimization following the conclusion of the brute force search, is turned off, but it can be reactivated with optimizer_args=dict(finish=True).
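    For example, a global method can be run with only an objective and bounds (a minimal sketch; the quadratic objective is a hypothetical stand-in for a real registration objective):

    from mymesh.register import optimize

    # Hypothetical toy objective with its minimum at (0.5, -0.25); a real
    # registration objective would measure alignment error for parameters x.
    def objective(x):
        return (x[0] - 0.5)**2 + (x[1] + 0.25)**2

    # Global methods require bounds; no initial guess is needed
    x, f = optimize(objective, 'direct', bounds=[(-1, 1), (-1, 1)])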

    All minimizers available through scipy.optimize.minimize() can be used. The one exception is that, if pdfo is installed, it will be used instead of scipy when method=’cobyla’. If method=’scipy’ is given, scipy will choose the default optimizer based on the given problem (depending on the presence of bounds or constraints).
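    As a sketch of local optimization (again with a hypothetical toy objective), a local minimizer needs an initial guess rather than bounds:

    from mymesh.register import optimize

    def objective(x):
        return (x[0] - 0.5)**2 + (x[1] + 0.25)**2

    # Local methods require an initial guess, x0
    x, f = optimize(objective, 'nelder-mead', x0=[0.0, 0.0])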

    pdfo, or “Powell’s Derivative-Free Optimization solvers”, is a group of algorithms developed by M. J. D. Powell for gradient/derivative-free optimization. The pdfo package offers a scipy-like interface to Powell’s algorithms.

    • ’uobyqa’: Unconstrained Optimization BY Quadratic Approximation

    • ’newuoa’: NEW Unconstrained Optimization Algorithm

    • ’bobyqa’: Bound Optimization BY Quadratic Approximation

    • ’lincoa’: LINearly Constrained Optimization Algorithm

    • ’cobyla’: Constrained Optimization BY Linear Approximation
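    For instance, the bound-constrained BOBYQA method can be given both an initial guess and bounds (a sketch assuming pdfo is installed, with the same hypothetical toy objective):

    from mymesh.register import optimize

    def objective(x):
        return (x[0] - 0.5)**2 + (x[1] + 0.25)**2

    # BOBYQA supports bound constraints
    x, f = optimize(objective, 'bobyqa', x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])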

  • x0 (array_like, optional) – Initial guess for the optimization, by default None. This is required for local methods, but not for global methods.

  • bounds (array_like, list of tuples, optional) – List of bounds for each parameter, e.g. [(-1, 1), (-1, 1), …], by default None. Bounds are required for some optimizers, particularly the global methods.

  • optimizer_args (dict, optional) – Additional input arguments to the chosen method, by default None. See available options in the documentation of scipy or pdfo.

    Example (method=’nelder-mead’):

    optimizer_args = dict(maxiter=100, fatol=1e-3)

    Note that the optional arguments differ between methods. Some have an options input, which must be defined within optimizer_args. Example (method=’powell’):

    optimizer_args = dict(options=dict(maxiter=100))
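    Such a dictionary is passed directly to optimize(). For example (a minimal sketch; the quadratic objective is a hypothetical stand-in for a real registration objective):

    from mymesh.register import optimize

    def objective(x):
        return (x[0] - 0.5)**2 + (x[1] + 0.25)**2

    # optimizer_args is forwarded to the underlying scipy call
    x, f = optimize(objective, 'nelder-mead', x0=[0.0, 0.0],
                    optimizer_args=dict(maxiter=100, fatol=1e-3))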

Returns:

  • x (np.ndarray) – Optimized parameters

  • f (float) – Value of the objective function at the identified optimal parameters