Merged
2 changes: 1 addition & 1 deletion .github/workflows/Documentation.yml
@@ -21,7 +21,7 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token
DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # For authentication with SSH deploy key
run: julia --project=docs/ --code-coverage=user docs/make.jl
run: julia --color=yes --project=docs/ --code-coverage=user docs/make.jl
- uses: julia-actions/julia-processcoverage@v1
with:
directories: src,lib/OptimizationBBO/src,lib/OptimizationCMAEvolutionStrategy/src,lib/OptimizationEvolutionary/src,lib/OptimizationGCMAES/src,lib/OptimizationMOI/src,lib/OptimizationMetaheuristics/src,lib/OptimizationMultistartOptimization/src,lib/OptimizationNLopt/src,lib/OptimizationNOMAD/src,lib/OptimizationOptimJL/src,lib/OptimizationOptimisers/src,lib/OptimizationPolyalgorithms/src,lib/OptimizationQuadDIRECT/src,lib/OptimizationSpeedMapping/src
2 changes: 1 addition & 1 deletion .github/workflows/Downgrade.yml
@@ -25,7 +25,7 @@ jobs:
version: ${{ matrix.julia-version }}
- uses: julia-actions/julia-downgrade-compat@v2
with:
skip: Pkg,TOML
skip: Pkg,TOML,LinearAlgebra,Logging,Printf,Random,SparseArrays,Test
- uses: julia-actions/julia-buildpkg@v1
- uses: julia-actions/julia-runtest@v1
with:
8 changes: 5 additions & 3 deletions .github/workflows/DowngradeSublibraries.yml
@@ -44,14 +44,16 @@ jobs:
- uses: julia-actions/setup-julia@v2
with:
version: ${{ matrix.julia-version }}
- if: ${{ matrix.project == 'lib/OptimizationQuadDIRECT' }}
run: julia --project -e 'using Pkg; Pkg.Registry.add(RegistrySpec(url = "https://github.com/HolyLab/HolyLabRegistry.git"));'
- uses: julia-actions/julia-downgrade-compat@v2
with:
project: ${{ matrix.project }}
skip: Pkg,TOML
projects: ${{ matrix.project }}
skip: Pkg,TOML,LinearAlgebra,Logging,Printf,Random,SparseArrays,Test
- uses: julia-actions/julia-buildpkg@v1
with:
project: ${{ matrix.project }}
- uses: julia-actions/julia-runtest@v1
with:
project: ${{ matrix.project }}
ALLOW_RERESOLVE: false
ALLOW_RERESOLVE: false
16 changes: 8 additions & 8 deletions Project.toml
@@ -1,6 +1,6 @@
name = "Optimization"
uuid = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
version = "5.0.0"
version = "5.1.0"

[deps]
ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
@@ -25,7 +25,7 @@ OptimizationOptimJL = {path = "lib/OptimizationOptimJL"}
OptimizationOptimisers = {path = "lib/OptimizationOptimisers"}

[compat]
ADTypes = "1.2"
ADTypes = "1.14"
Aqua = "0.8"
ArrayInterface = "7.10"
BenchmarkTools = "1"
@@ -50,18 +50,18 @@ Mooncake = "0.4.138"
Optim = ">= 1.4.1"
Optimisers = ">= 0.2.5"
OptimizationBase = "4"
OptimizationLBFGSB = "1"
OptimizationMOI = "0.5"
OptimizationOptimJL = "0.4"
OptimizationOptimisers = "0.3"
OptimizationLBFGSB = "1.2"
OptimizationMOI = "0.5.9"
OptimizationOptimJL = "0.4.7"
OptimizationOptimisers = "0.3.14"
OrdinaryDiffEqTsit5 = "1"
Pkg = "1"
Printf = "1.10"
Random = "1.10"
Reexport = "1.2"
Reexport = "1.2.2"
ReverseDiff = "1"
SafeTestsets = "0.1"
SciMLBase = "2.104"
SciMLBase = "2.122.1"
SciMLSensitivity = "7"
SparseArrays = "1.10"
Symbolics = "6"
21 changes: 16 additions & 5 deletions README.md
@@ -35,16 +35,30 @@ installation of dependencies. Below is the list of packages that need to be
installed explicitly if you intend to use the specific optimization algorithms
offered by them (a short installation sketch follows the list):

- OptimizationAuglag for augmented Lagrangian methods
- OptimizationBBO for [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl)
- OptimizationCMAEvolutionStrategy for [CMAEvolutionStrategy.jl](https://github.com/jbrea/CMAEvolutionStrategy.jl)
- OptimizationEvolutionary for [Evolutionary.jl](https://github.com/wildart/Evolutionary.jl) (see also [this documentation](https://wildart.github.io/Evolutionary.jl/dev/))
- OptimizationGCMAES for [GCMAES.jl](https://github.com/AStupidBear/GCMAES.jl)
- OptimizationMOI for [MathOptInterface.jl](https://github.com/jump-dev/MathOptInterface.jl) (usage of algorithm via MathOptInterface API; see also the API [documentation](https://jump.dev/MathOptInterface.jl/stable/))
- OptimizationIpopt for [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl)
- OptimizationLBFGSB for [LBFGSB.jl](https://github.com/Gnimuc/LBFGSB.jl)
- OptimizationMadNLP for [MadNLP.jl](https://github.com/MadNLP/MadNLP.jl)
- OptimizationManopt for [Manopt.jl](https://github.com/JuliaManifolds/Manopt.jl) (optimization on manifolds)
- OptimizationMetaheuristics for [Metaheuristics.jl](https://github.com/jmejia8/Metaheuristics.jl) (see also [this documentation](https://jmejia8.github.io/Metaheuristics.jl/stable/))
- OptimizationMOI for [MathOptInterface.jl](https://github.com/jump-dev/MathOptInterface.jl) (usage of algorithm via MathOptInterface API; see also the API [documentation](https://jump.dev/MathOptInterface.jl/stable/))
- OptimizationMultistartOptimization for [MultistartOptimization.jl](https://github.com/tpapp/MultistartOptimization.jl) (see also [this documentation](https://juliahub.com/docs/MultistartOptimization/cVZvi/0.1.0/))
- OptimizationNLopt for [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) (usage via the NLopt API; see also the available [algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/))
- OptimizationNLPModels for [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl)
- OptimizationNOMAD for [NOMAD.jl](https://github.com/bbopt/NOMAD.jl) (see also [this documentation](https://bbopt.github.io/NOMAD.jl/stable/))
- OptimizationNonconvex for [Nonconvex.jl](https://github.com/JuliaNonconvex/Nonconvex.jl) (see also [this documentation](https://julianonconvex.github.io/Nonconvex.jl/stable/))
- OptimizationODE for optimization of steady-state and time-dependent ODE problems
- OptimizationOptimJL for [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl)
- OptimizationOptimisers for [Optimisers.jl](https://github.com/FluxML/Optimisers.jl) (machine learning optimizers)
- OptimizationPolyalgorithms for polyalgorithm optimization strategies
- OptimizationPRIMA for [PRIMA.jl](https://github.com/libprima/PRIMA.jl)
- OptimizationPyCMA for Python's CMA-ES implementation via [PythonCall.jl](https://github.com/JuliaPy/PythonCall.jl)
- OptimizationQuadDIRECT for [QuadDIRECT.jl](https://github.com/timholy/QuadDIRECT.jl)
- OptimizationSciPy for [SciPy](https://scipy.org/) optimization algorithms via [PythonCall.jl](https://github.com/JuliaPy/PythonCall.jl)
- OptimizationSophia for the Sophia optimizer (a second-order stochastic optimizer)
- OptimizationSpeedMapping for [SpeedMapping.jl](https://github.com/NicolasL-S/SpeedMapping.jl) (see also [this documentation](https://nicolasl-s.github.io/SpeedMapping.jl/stable/))

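For instance, a minimal installation sketch (using OptimizationBBO, which the example below relies on; any entry in the list installs the same way):

```julia
# Sketch: solver subpackages are ordinary Julia packages; adding one also
# pulls in the wrapped optimizer library (here BlackBoxOptim.jl).
using Pkg
Pkg.add("OptimizationBBO")

using Optimization, OptimizationBBO  # the BBO_* algorithms are now available
```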
## Tutorials and Documentation
@@ -72,9 +86,6 @@ prob = OptimizationProblem(rosenbrock, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
```

Note that Optim.jl is a core dependency of Optimization.jl. However, BlackBoxOptim.jl
is not and must already be installed (see the list above).

*Warning:* The output of the second optimization task (`BBO_adaptive_de_rand_1_bin_radiuslimited()`) is
currently misleading in the sense that it returns `Status: failure (reached maximum number of iterations)`. However, convergence is actually
reached and the confusing message stems from the reliance on the Optim.jl output
8 changes: 5 additions & 3 deletions docs/Project.toml
@@ -19,12 +19,13 @@ NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6"
NLPModelsTest = "7998695d-6960-4d3a-85c4-e1bceb8cd856"
NLopt = "76087f3c-5699-56af-9a33-bf431cd00edd"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb"
OptimizationBBO = "3e6eede4-6085-4f62-9a71-46d9bc1eb92b"
OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb"
OptimizationCMAEvolutionStrategy = "bd407f91-200f-4536-9381-e4ba712f53f8"
OptimizationEvolutionary = "cb963754-43f6-435e-8d4b-99009ff27753"
OptimizationGCMAES = "6f0a0517-dbc2-4a7a-8a20-99ae7f27e911"
OptimizationIpopt = "43fad042-7963-4b32-ab19-e2a4f9a67124"
OptimizationLBFGSB = "22f7324a-a79d-40f2-bebe-3af60c77bd15"
OptimizationMOI = "fd9f6733-72f4-499f-8506-86b2bdd0dea1"
OptimizationManopt = "e57b7fff-7ee7-4550-b4f0-90e9476e9fb6"
OptimizationMetaheuristics = "3aafef2f-86ae-4776-b337-85a36adf0b55"
@@ -49,12 +50,13 @@ Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

[sources]
Optimization = {path = ".."}
OptimizationBase = {path = "../lib/OptimizationBase"}
OptimizationBBO = {path = "../lib/OptimizationBBO"}
OptimizationBase = {path = "../lib/OptimizationBase"}
OptimizationCMAEvolutionStrategy = {path = "../lib/OptimizationCMAEvolutionStrategy"}
OptimizationEvolutionary = {path = "../lib/OptimizationEvolutionary"}
OptimizationGCMAES = {path = "../lib/OptimizationGCMAES"}
OptimizationIpopt = {path = "../lib/OptimizationIpopt"}
OptimizationLBFGSB = {path = "../lib/OptimizationLBFGSB"}
OptimizationMOI = {path = "../lib/OptimizationMOI"}
OptimizationManopt = {path = "../lib/OptimizationManopt"}
OptimizationMetaheuristics = {path = "../lib/OptimizationMetaheuristics"}
@@ -87,8 +89,8 @@ NLPModels = "0.21"
NLPModelsTest = "0.10"
NLopt = "0.6, 1"
Optimization = "5"
OptimizationBase = "4"
OptimizationBBO = "0.4"
OptimizationBase = "4"
OptimizationCMAEvolutionStrategy = "0.3"
OptimizationEvolutionary = "0.4"
OptimizationGCMAES = "0.3"
4 changes: 2 additions & 2 deletions docs/src/API/ad.md
@@ -7,7 +7,7 @@ The choices for the auto-AD fill-ins with quick descriptions are:
- `AutoTracker()`: Like ReverseDiff but GPU-compatible
- `AutoZygote()`: The fastest choice for non-mutating array-based (BLAS) functions
- `AutoFiniteDiff()`: Finite differencing, not optimal but always applicable
- `AutoModelingToolkit()`: The fastest choice for large scalar optimizations
- `AutoSymbolics()`: The fastest choice for large scalar optimizations
- `AutoEnzyme()`: Highly performant AD choice for type stable and optimized code
- `AutoMooncake()`: Like Zygote and ReverseDiff, but supports GPU and mutating code

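For orientation, a minimal sketch of how one of these choices is plugged in (mirroring the Rosenbrock setup used throughout these docs); swapping the adtype is the only change needed to switch backends:

```julia
# Sketch: the AD choice is passed as the second argument to OptimizationFunction.
using Optimization, Zygote

rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
# Swap in AutoFiniteDiff(), AutoEnzyme(), etc. without touching `rosenbrock`.
```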
@@ -21,7 +21,7 @@ OptimizationBase.AutoFiniteDiff
OptimizationBase.AutoReverseDiff
OptimizationBase.AutoZygote
OptimizationBase.AutoTracker
OptimizationBase.AutoModelingToolkit
OptimizationBase.AutoSymbolics
OptimizationBase.AutoEnzyme
ADTypes.AutoMooncake
```
2 changes: 1 addition & 1 deletion docs/src/API/modelingtoolkit.md
@@ -7,7 +7,7 @@ optimization of code. Optimizers can better interface with the extra
symbolic information provided by the system.

There are two ways that the user interacts with ModelingToolkit.jl.
One can use `OptimizationFunction` with `AutoModelingToolkit` for
One can use `OptimizationFunction` with `AutoSymbolics` for
automatically transforming numerical code into symbolic code. See
the [OptimizationFunction documentation](@ref optfunction) for more
details.
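As a quick sketch of that first route (mirroring the example on the Optim.jl solver page of these docs):

```julia
# Sketch: AutoSymbolics traces the numerical objective into a symbolic form,
# letting ModelingToolkit.jl analyze and simplify it before solving.
using Optimization, ModelingToolkit

rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
f = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())
prob = OptimizationProblem(f, zeros(2), [1.0, 100.0])
```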
8 changes: 4 additions & 4 deletions docs/src/examples/rosenbrock.md
@@ -5,7 +5,7 @@ flexibility of Optimization.jl. This is a gauntlet of many solvers to get a feel
for common workflows of the package and give copy-pastable starting points.

!!! note

This example uses many different solvers of Optimization.jl. Each solver
subpackage needs to be installed separately. For example, for details on
the installation and usage of the OptimizationOptimJL.jl package, see the
@@ -14,12 +14,12 @@ for common workflows of the package and give copy-pastable starting points.
The objective of this exercise is to determine the $(x, y)$ value pair that minimizes the result of a Rosenbrock function $f$ with some parameter values $a$ and $b$. The Rosenbrock function is useful for testing because it is known *a priori* to have a global minimum at $(a, a^2)$.
```math
f(x,\,y;\,a,\,b) = \left(a - x\right)^2 + b \left(y - x^2\right)^2
```
```

The Optimization.jl interface expects functions to be defined with a vector of optimization arguments $\bar{x}$ and a vector of parameters $\bar{p}$, i.e.:
```math
f(\bar{x},\,\bar{p}) = \left(p_1 - x_1\right)^2 + p_2 \left(x_2 - x_1^2\right)^2
```
```

Parameters $a$ and $b$ are captured in a vector $\bar{p}$ and assigned some arbitrary values to produce a particular Rosenbrock function to be minimized.
```math
@@ -164,7 +164,7 @@ sol = solve(prob, CMAEvolutionStrategyOpt())

```@example rosenbrock
using OptimizationNLopt, ModelingToolkit
optf = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit())
optf = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())
prob = OptimizationProblem(optf, x0, _p)

sol = solve(prob, Opt(:LN_BOBYQA, 2))
10 changes: 5 additions & 5 deletions docs/src/getting_started.md
@@ -14,15 +14,15 @@ The simplest copy-pasteable code using a quasi-Newton method (LBFGS) to solve th

```@example intro
# Import the package and define the problem to optimize
using Optimization, Zygote
using Optimization, OptimizationLBFGSB, Zygote
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
u0 = zeros(2)
p = [1.0, 100.0]

optf = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optf, u0, p)

sol = solve(prob, Optimization.LBFGS())
sol = solve(prob, OptimizationLBFGSB.LBFGS())
```

```@example intro
@@ -134,7 +134,7 @@ looks like:
using ForwardDiff
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, u0, p)
sol = solve(prob, BFGS())
sol = solve(prob, OptimizationOptimJL.BFGS())
```

We can inspect the `original` to see the statistics on the number of steps
@@ -157,7 +157,7 @@ We can demonstrate this via:
using Zygote
optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
prob = OptimizationProblem(optf, u0, p)
sol = solve(prob, BFGS())
sol = solve(prob, OptimizationOptimJL.BFGS())
```

## Setting Box Constraints
@@ -170,7 +170,7 @@ optimization with box constraints by rebuilding the OptimizationProblem:

```@example intro
prob = OptimizationProblem(optf, u0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, BFGS())
sol = solve(prob, OptimizationOptimJL.BFGS())
```

For more information on handling constraints, in particular equality and
4 changes: 2 additions & 2 deletions docs/src/optimization_packages/mathoptinterface.md
@@ -20,7 +20,7 @@ the `maxtime` common keyword argument.
`OptimizationMOI` supports an argument `mtkize`, a boolean (defaulting to `false`)
that enables automatic symbolic expression generation; this allows using any AD backend with
solvers or interfaces, such as AmplNLWriter, that require the expression graph of the objective
and constraints. This always happens automatically in the case of the `AutoModelingToolkit`
and constraints. This always happens automatically in the case of the `AutoSymbolics`
`adtype`.
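A hedged sketch of what that looks like (the `AmplNLWriter`/`Ipopt_jll` pairing and the placement of `mtkize` as a `solve` keyword are assumptions based on the description above):

```julia
# Sketch: `mtkize = true` asks OptimizationMOI to trace the problem into
# symbolic expressions, so an expression-graph interface such as AmplNLWriter
# can be used even though the objective was written numerically.
using Optimization, OptimizationMOI, AmplNLWriter, Ipopt_jll

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0])
sol = solve(prob, AmplNLWriter.Optimizer(Ipopt_jll.amplexe); mtkize = true)
```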

An optimizer which supports the `MathOptInterface` API can be called
@@ -94,7 +94,7 @@ The following shows how to use integer linear programming within `Optimization`.
[Juniper documentation](https://github.com/lanl-ansi/Juniper.jl) for more
detail.
- The integer domain is inferred based on the bounds of the variable (see the sketch after this list):

+ Setting the lower bound to zero and the upper bound to one corresponds to `MOI.ZeroOne()` or a binary decision variable
+ Providing other or no bounds corresponds to `MOI.Integer()`

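A hedged sketch of this inference (the `int` keyword on `OptimizationProblem` is assumed from the SciML problem interface; it marks which variables are integer-constrained):

```julia
# Sketch: with `int` set, the bounds determine the inferred MOI domain.
using Optimization, OptimizationMOI

objective(u, p) = sum(abs2, u .- p)
optf = OptimizationFunction(objective, Optimization.AutoForwardDiff())

# u[1]: bounds [0, 1] plus integrality -> MOI.ZeroOne() (binary)
# u[2]: wider bounds plus integrality  -> MOI.Integer()
prob = OptimizationProblem(optf, [0.0, 2.0], [1.0, 3.0];
    lb = [0.0, 0.0], ub = [1.0, 10.0], int = [true, true])
```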
2 changes: 1 addition & 1 deletion docs/src/optimization_packages/optim.md
@@ -340,7 +340,7 @@ using Optimization, OptimizationOptimJL, ModelingToolkit
rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
f = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit())
f = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())
prob = Optimization.OptimizationProblem(f, x0, p)
sol = solve(prob, Optim.Newton())
```
8 changes: 4 additions & 4 deletions docs/src/optimization_packages/optimization.md
@@ -5,7 +5,7 @@ There are some solvers that are available in the Optimization.jl package directl
## Methods

- `LBFGS`: The popular quasi-Newton method that leverages a limited-memory BFGS approximation of the inverse of the Hessian, through a wrapper over the [L-BFGS-B](https://users.iems.northwestern.edu/%7Enocedal/lbfgsb.html) Fortran routine accessed from the [LBFGSB.jl](https://github.com/Gnimuc/LBFGSB.jl/) package. It directly supports box constraints.

It can also handle arbitrary nonlinear constraints through an augmented Lagrangian method with bound constraints, described in section 17.4 of Numerical Optimization by Nocedal and Wright, thus serving as a general-purpose nonlinear optimization solver available directly in Optimization.jl.

```@docs
@@ -18,15 +18,15 @@ Optimization.Sophia

```@example L-BFGS

using Optimization, Zygote
using Optimization, OptimizationLBFGSB, Zygote

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

optf = OptimizationFunction(rosenbrock, AutoZygote())
prob = Optimization.OptimizationProblem(optf, x0, p)
sol = solve(prob, Optimization.LBFGS())
sol = solve(prob, LBFGS())
```

### With nonlinear and bounds constraints
@@ -41,7 +41,7 @@ optf = OptimizationFunction(rosenbrock, AutoZygote(), cons = con2_c)
prob = OptimizationProblem(optf, x0, p, lcons = [1.0, -Inf],
ucons = [1.0, 0.0], lb = [-1.0, -1.0],
ub = [1.0, 1.0])
res = solve(prob, Optimization.LBFGS(), maxiters = 100)
res = solve(prob, LBFGS(), maxiters = 100)
```

### Train NN with Sophia
4 changes: 2 additions & 2 deletions docs/src/tutorials/certification.md
@@ -7,7 +7,7 @@ This works with the `structural_analysis` keyword argument to `OptimizationProbl
We'll use a simple example to illustrate the convexity structure certification process.

```@example symanalysis
using SymbolicAnalysis, Zygote, LinearAlgebra, Optimization
using SymbolicAnalysis, Zygote, LinearAlgebra, Optimization, OptimizationLBFGSB

function f(x, p = nothing)
return exp(x[1]) + x[1]^2
@@ -16,7 +16,7 @@ end
optf = OptimizationFunction(f, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, [0.4], structural_analysis = true)

sol = solve(prob, Optimization.LBFGS(), maxiters = 1000)
sol = solve(prob, LBFGS(), maxiters = 1000)
```

The result can be accessed as the `analysis_results` field of the solution.
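A minimal sketch of reading it back (the nesting under `cache` is an assumption about the current field layout):

```julia
# Sketch: the certification output for the objective; `analysis_results` is
# named in the text above, the `cache` nesting is assumed.
sol.cache.analysis_results.objective
```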
2 changes: 1 addition & 1 deletion docs/src/tutorials/constraints.md
@@ -81,7 +81,7 @@ x_1 * x_2 = 0.5
```

```@example constraints
optprob = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit(), cons = cons)
optprob = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics(), cons = cons)
prob = OptimizationProblem(optprob, x0, _p, lcons = [1.0, 0.5], ucons = [1.0, 0.5])
```

2 changes: 1 addition & 1 deletion docs/src/tutorials/linearandinteger.md
@@ -91,7 +91,7 @@ objective = (u, p) -> (v = p[1:5]; dot(v, u))

cons = (res, u, p) -> (w = p[6:10]; res .= [sum(w[i] * u[i]^2 for i in 1:5)])

optf = OptimizationFunction(objective, Optimization.AutoModelingToolkit(), cons = cons)
optf = OptimizationFunction(objective, Optimization.AutoSymbolics(), cons = cons)
optprob = OptimizationProblem(optf,
zeros(5),
vcat(v, w);
6 changes: 3 additions & 3 deletions docs/src/tutorials/remakecomposition.md
@@ -11,7 +11,7 @@ The SciML interface provides a `remake` function which allows you to recreate th
Let's look at a 10-dimensional Schwefel function in the hypercube $x_i \in [-500, 500]$.

```@example polyalg
using Optimization, Random
using Optimization, OptimizationLBFGSB, Random
using OptimizationBBO, ReverseDiff

Random.seed!(122333)
@@ -24,7 +24,7 @@ function f_schwefel(x, p = [418.9829])
return result
end

optf = OptimizationFunction(f_schwefel, Optimization.AutoReverseDiff(compile = true))
optf = OptimizationFunction(f_schwefel, AutoReverseDiff(compile = true))

x0 = ones(10) .* 200.0
prob = OptimizationProblem(
@@ -47,7 +47,7 @@ This is a good start; can we converge to the global optimum?

```@example polyalg
prob = remake(prob, u0 = res1.minimizer)
res2 = solve(prob, Optimization.LBFGS(), maxiters = 100)
res2 = solve(prob, LBFGS(), maxiters = 100)

@show res2.objective
```