Minimizing a non-convex function with linear constraints and bounds in mystic

Problem description

Suppose I have a non-convex objective function loss that takes an np.ndarray named X of shape (n,) and returns a float. The objective has many, many local minima, because it essentially applies np.round(X * c, 2), where c is another constant array of shape (n,).

You can imagine something like this:

import numpy as np
from scipy.optimize import rosen

def loss(X: np.ndarray) -> float:
    c = np.array([0.1, 0.5, -0.8, 7.0, 0.0])
    X_rounded = np.round(X * c, 2)
    return rosen(X_rounded)

The linear constraints are expressed with two constant matrices (also stored as numpy ndarrays): A of shape (m, n) and b of shape (m,). I need to minimize loss over X while keeping A.dot(X) == b. In addition, every element of X must satisfy 0 <= X_i <= 2. I have a decent initial guess X0 = [1, 1, ..., 1].

I don't need the global minimum; the search can stop as soon as loss(X) <= 1. The objective is mostly written in SQL and is therefore very slow, so I also want the optimization to terminate once loss has been evaluated about 200 times. (This is not a hard requirement; you could also terminate after the optimization has run for 5 minutes.)
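One solver-independent way to enforce both stopping rules is to wrap the objective and abort via an exception; below is a minimal sketch of that idea. CountedLoss and StopSearch are names invented here, and a toy quadratic stands in for the real SQL-backed loss:

```python
import numpy as np

class StopSearch(Exception):
    """Raised to abort the optimizer early."""

class CountedLoss:
    """Wrap an objective: count calls, remember the best point seen,
    and abort once the target value or the call budget is reached."""
    def __init__(self, f, target=1.0, max_evals=200):
        self.f, self.target, self.max_evals = f, target, max_evals
        self.calls, self.best_x, self.best_y = 0, None, np.inf

    def __call__(self, x):
        self.calls += 1
        y = self.f(x)
        if y < self.best_y:
            self.best_x, self.best_y = np.asarray(x, float).copy(), y
        if y <= self.target or self.calls >= self.max_evals:
            raise StopSearch  # the best point survives in self.best_x
        return y

# toy objective: minimum 0 at x == [1, 1]
wrapped = CountedLoss(lambda x: float(np.sum((np.asarray(x) - 1) ** 2)),
                      target=1.0, max_evals=5)
try:
    for x in ([0., 0.], [3., 3.], [1.2, 0.9]):  # pretend a solver proposed these
        wrapped(x)
except StopSearch:
    pass
print(wrapped.calls, round(wrapped.best_y, 2))  # 3 0.05
```

The try/except would go around the solver call itself, so any optimizer can be cut off without cooperation from its own options.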

With scipy I can do

from scipy.optimize import minimize

rv = minimize(loss, initial_guess, method='SLSQP',
              bounds=[(0, 2)] * n,
              constraints={'type': 'eq', 'fun': lambda x: A.dot(x) - b},
              options={'maxiter': 5})

I'm not happy with this solution because the result is worse than my artificial initial guess (which was sampled near the global minimum as a smoke test), probably due to the large number of local minima and perhaps some numerical issues. Moreover, I can't estimate the number of objective calls per iteration (otherwise I could cap the number of evaluations by setting maxiter).
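Part of the problem is structural: because of the np.round, the objective is piecewise constant, so the finite-difference gradients SLSQP relies on come out as exactly zero almost everywhere. A small self-contained check, using the rosen-based toy loss from above:

```python
import numpy as np
from scipy.optimize import rosen

c = np.array([0.1, 0.5, -0.8, 7.0, 0.0])

def loss(X):
    return rosen(np.round(X * c, 2))

X = np.ones(5)
eps = 1e-8  # a typical finite-difference step size
# a step of size eps almost never crosses a rounding boundary,
# so every forward-difference slope evaluates to exactly 0.0
grads = [(loss(X + eps * np.eye(5)[i]) - loss(X)) / eps for i in range(5)]
print(grads)
```

With a zero gradient estimate, SLSQP has no descent direction and stalls wherever it starts, which is consistent with the result being no better than the initial guess.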

How can I do better with mystic, which is presumably more flexible?

Solution

Since I don't know what A and b are, I'm going to improvise. So this won't be exactly the same as your problem, but it should be close enough.

Let's set up the problem by building the loss function and the constraints. There may be better ways to build the constraints, but the following is very general (if a bit ugly):

>>> import mystic as my
>>> import numpy as np
>>> from mystic.models import rosen
>>>
>>> A = np.array([[9., 0., 8., -1],
...               [1., 1., -1., 0.],
...               [2., -2., 6., 5.]])
>>> b = np.array([18., .75, 11.5])
>>> c = np.array([0.1, 0.5, -0.8, 7.0, 0.0])
>>>
>>> def loss(x):
...     x_rounded = np.round(x * c, 2)
...     return rosen(x_rounded)
...
>>> cons = my.symbolic.linear_symbolic(A, b)
>>> cons = my.symbolic.solve(cons)
>>> cons = my.symbolic.generate_constraint(my.symbolic.generate_solvers(cons))
>>> bounds = [(0, 2)] * len(c)
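For intuition about what generate_constraint produces: it yields a function that maps any candidate x to a nearby point that satisfies the equalities exactly. Conceptually that is a projection onto the affine subspace {x : A.dot(x) == b}. Here is a numpy-only sketch of that idea (not mystic's actual implementation, and with a small made-up A and b):

```python
import numpy as np

# toy system, not the A and b from above
A = np.array([[1., 1., 0.],
              [0., 1., 1.]])
b = np.array([2., 1.])

def project(x):
    """Return the Euclidean-closest point y to x with A @ y == b."""
    x = np.asarray(x, dtype=float)
    # the correction x - y lies in the row space of A: y = x - A^+ (A x - b)
    return x - np.linalg.pinv(A) @ (A @ x - b)

y = project([5., 5., 5.])
print(np.allclose(A @ y, b))  # True
```

A solver can then search freely over candidates, with every candidate mapped into the feasible set before the objective is evaluated.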

Then let's try to solve for the global minimum:

>>> stepmon = my.monitors.VerboseMonitor(1)
>>> rv = my.solvers.diffev2(loss, x0=bounds, bounds=bounds, constraints=cons,
...                         itermon=stepmon, disp=1, npop=20)
Generation 0 has ChiSquare: 15478.596962
Generation 1 has ChiSquare: 1833.714503
Generation 2 has ChiSquare: 1833.714503
Generation 3 has ChiSquare: 270.601079
Generation 4 has ChiSquare: 160.690618
Generation 5 has ChiSquare: 160.690618
Generation 6 has ChiSquare: 127.289639
Generation 7 has ChiSquare: 127.289639
Generation 8 has ChiSquare: 127.289639
Generation 9 has ChiSquare: 123.054668
Generation 10 has ChiSquare: 123.054668
Generation 11 has ChiSquare: 123.054668
Generation 12 has ChiSquare: 122.561794
Generation 13 has ChiSquare: 121.069338
Generation 14 has ChiSquare: 120.828279
Generation 15 has ChiSquare: 117.732442
Generation 16 has ChiSquare: 117.732442
Generation 17 has ChiSquare: 117.340042
Generation 18 has ChiSquare: 117.340042
Generation 19 has ChiSquare: 117.340042
Generation 20 has ChiSquare: 117.340042
Generation 21 has ChiSquare: 117.340042
Generation 22 has ChiSquare: 116.750933
Generation 23 has ChiSquare: 116.750933
Generation 24 has ChiSquare: 116.750933
Generation 25 has ChiSquare: 116.750933
Generation 26 has ChiSquare: 116.750933
Generation 27 has ChiSquare: 116.750933
Generation 28 has ChiSquare: 116.750933
Generation 29 has ChiSquare: 116.750933
Generation 30 has ChiSquare: 116.750933
Generation 31 has ChiSquare: 116.750933
Generation 32 has ChiSquare: 116.750933
Generation 33 has ChiSquare: 116.750933
Generation 34 has ChiSquare: 116.750933
Generation 35 has ChiSquare: 116.750933
Generation 36 has ChiSquare: 116.750933
Generation 37 has ChiSquare: 116.750933
Generation 38 has ChiSquare: 116.750933
Generation 39 has ChiSquare: 116.750933
Generation 40 has ChiSquare: 116.750933
Generation 41 has ChiSquare: 116.750933
Generation 42 has ChiSquare: 116.750933
Generation 43 has ChiSquare: 116.750933
Generation 44 has ChiSquare: 116.750933
Generation 45 has ChiSquare: 116.750933
Generation 46 has ChiSquare: 116.750933
Generation 47 has ChiSquare: 116.750933
Generation 48 has ChiSquare: 116.750933
Generation 49 has ChiSquare: 116.750933
Generation 50 has ChiSquare: 116.750933
Generation 51 has ChiSquare: 116.750933
STOP("VTRChangeOverGeneration with {'ftol': 0.005,'gtol': 1e-06,'generations': 30,'target': 0.0}")
Optimization terminated successfully.
         Current function value: 116.750933
         Iterations: 51
         Function evaluations: 1040

>>> A.dot(rv)
array([18.  ,  0.75, 11.5 ])
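As a sanity check on the cost: differential evolution evaluates every population member once per generation, so the evaluation count is roughly npop times the number of generations (generation 0 through 51 above):

```python
npop, generations = 20, 52  # npop from the diffev2 call; 52 = generations 0..51
print(npop * generations)   # 1040, matching "Function evaluations: 1040"
```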

That works (it's probably still not the global minimum)... but it takes a while. So let's try a faster local solver.

>>> stepmon = my.monitors.VerboseMonitor(1)
>>> rv = my.solvers.fmin_powell(loss, x0=[1]*len(c), bounds=bounds, constraints=cons,
...                             itermon=stepmon, disp=1)
Generation 0 has ChiSquare: 244559.856997
Generation 1 has ChiSquare: 116357.59447400003
Generation 2 has ChiSquare: 121.23445799999999
Generation 3 has ChiSquare: 117.635447
Generation 4 has ChiSquare: 117.59764200000001
Generation 5 has ChiSquare: 117.59764200000001
Optimization terminated successfully.
         Current function value: 117.597642
         Iterations: 5
         Function evaluations: 388
STOP("NormalizedChangeOverGeneration with {'tolerance': 0.0001,'generations': 2}")

>>> A.dot(rv)
array([18.  ,  0.75, 11.5 ])

Not bad. However, you wanted to limit the number of loss evaluations, and also to be able to stop once loss gets close to a minimum... so let's say we stop whenever loss(x) <= 120. I'll also cap the number of function evaluations at 200:

>>> stepmon = my.monitors.VerboseMonitor(1)
>>> rv = my.solvers.fmin_powell(loss, x0=[1]*len(c), bounds=bounds, constraints=cons,
...                             itermon=stepmon, maxfun=200, gtol=None, ftol=120, disp=1)
Generation 0 has ChiSquare: 244559.856997
Generation 1 has ChiSquare: 116357.59447400003
Generation 2 has ChiSquare: 121.23445799999999
Generation 3 has ChiSquare: 117.635447
Optimization terminated successfully.
         Current function value: 117.635447
         Iterations: 3
         Function evaluations: 175
STOP("VTRChangeOverGeneration with {'ftol': 120,'target': 0.0}")

>>> A.dot(rv)
array([18.  ,  0.75, 11.5 ])
>>> rv
array([1.93873933, 0.00381084, 1.19255017, 0.0807893 , 0.0949684 ])

You'd have even more flexibility if you used the solver's class interface, but I'll leave that for another time.