
Optim of Tomorrow #326

Closed · 8 of 13 tasks
pkofod opened this issue Dec 22, 2016 · 6 comments

Comments

@pkofod
Member

pkofod commented Dec 22, 2016

aka Optim Roadmap 2.0 aka Make Optim Greater Still...

These lists are up for discussion; I just figured I would start that discussion now. JuliaLang v1.0 is approaching rapidly, with the v0.6 feature freeze not far in the future. I am not sure we should tag Optim v1.0 the day JuliaLang starts shipping as non-beta, but it would be cool to have a lot of the current issues sorted out by then.

Let the number of issues guide you as to what I will tackle first :)

General

Documentation

  • General QA on the contents
  • Identify weak pages
  • Add references

Constrained

  • Read through grants.gov to find money for @timholy so he can take a sabbatical from his boring image lab and work on his fun fun fun constrained optimizer instead! (@anriseth did it for free instead :) )

Solvers (up for grabs if the person is willing to "maintain")

In general, I think we might want to delete feature/solver requests and keep a single issue for them, as the requesters are rarely the ones willing to write up and maintain the code anyway, so the issues always end up idle.

More
... and more! I think 2017 is going to be a good year for Optim.

@cortner
Contributor

cortner commented Dec 23, 2016

I'd like to add

  • rewrite line searches as types (I realise this is LineSearches.jl territory, but it can only go hand in hand with Optim.jl); see LineSearchOptions LineSearches.jl#9
  • an intelligent objective that avoids multiple evaluations; Multiple f evaluations for same state #282
  • a benchmark function that, given a small set of objectives, tests a variety of methods maybe with different parameters, and recommends the optimal configuration, which can then be reused for multiple future runs.

@pkofod
Member Author

pkofod commented Dec 23, 2016

Those are good points.

Dispatch-based line searches
The first one has to go hand in hand with NLsolve as well, I guess, @KristofferC. It would be great to make an interface such that it is easy to call a solver here with a line search that is not in LineSearches.jl, and I think the dispatch-based idea you have in mind does exactly that (see the sketch below). This would to some degree separate the work here and in LineSearches, to the benefit of both parties.
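A minimal sketch of what a dispatch-based design could look like, assuming each line search is a callable type; the type names and the simple backtracking rule here are illustrative, not the actual LineSearches.jl API:

# Each line search is a type; a solver only holds an `ls` field and calls it,
# so a user-defined line search is just another subtype with its own call method.
abstract type LineSearch end

struct BackTracking <: LineSearch
    c::Float64      # sufficient-decrease (Armijo) parameter
    rho::Float64    # factor by which the step length shrinks
end

# phi(alpha) = f(x + alpha*s), phi0 = phi(0), dphi0 = phi'(0) < 0
function (ls::BackTracking)(phi, phi0, dphi0, alpha = 1.0)
    while phi(alpha) > phi0 + ls.c * alpha * dphi0
        alpha *= ls.rho   # shrink until the Armijo condition holds
    end
    alpha
end

# usage: ls = BackTracking(1e-4, 0.5); alpha = ls(phi, phi0, dphi0)
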
Function evaluations
I think the second point relates to the DifferentiableFunction rewrite. We could certainly make it so that value(df, x) only actually evaluates f if x is different from last time. This would solve it, right?

# Only evaluate f when x differs from the point we evaluated last time.
function value(df, x)
    if is_new(x, df.last_x)   # is_new: does x differ from the cached point?
        df.f_calls += 1
        df.f_x = df.f(x)
        copy!(df.last_x, x)   # update the cache
    end
    df.f_x
end
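
For concreteness, a minimal self-contained holder matching the fields used above; the names and the elementwise is_new check are illustrative assumptions, not the actual DifferentiableFunction layout:

mutable struct CachedF{F}
    f::F                      # the objective
    f_x::Float64              # cached objective value
    last_x::Vector{Float64}   # point at which f_x was computed
    f_calls::Int              # evaluation counter
end

# NaN-filled cache guarantees the first call always evaluates,
# since NaN compares unequal to everything.
CachedF(f, n::Int) = CachedF(f, NaN, fill(NaN, n), 0)

is_new(x, last_x) = x != last_x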

Benchmarks
The benchmarking function sounds fun, but that should probably be started outside of Optim. It can also be started as a PR here, but I think it would need some work before we can merge it as an official part of this package. You are thinking of something like recommend(problems), where problems is some collection of solver/starting point/parameter specifications (a rough sketch follows below)? I guess it could also be based on CUTEst problems and more. That is, some sort of recommender system with a stored state based on the large problem collection, to which you could then add your own problems, perhaps with a relatively large weight attached, and update the recommender's parameters. I'm not sure how well it would work in practice, but we could start the discussion over at JuliaML (gitter?).
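
Something like this rough sketch; the recommend function, the fewest-total-f-calls criterion, and the default method list are all assumptions for illustration, not an agreed design:

using Optim

# Run each candidate method on all problems; recommend the one that
# converged everywhere with the fewest objective evaluations in total.
function recommend(problems, methods = (GradientDescent(), BFGS(), LBFGS()))
    best, best_calls = nothing, typemax(Int)
    for m in methods
        total, ok = 0, true
        for (f, x0) in problems
            res = optimize(f, x0, m)
            ok &= Optim.converged(res)
            total += Optim.f_calls(res)
        end
        if ok && total < best_calls
            best, best_calls = m, total
        end
    end
    best
end

# usage: recommend([(x -> sum(abs2, x), [1.0, 2.0])])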

@cortner
Contributor

cortner commented Dec 24, 2016

Function evaluations
I think the second point relates to the DifferentiableFunction rewrite. We could certainly make it so that value(df, x) only actually evaluates f if x is different from last time. This would solve it, right?

Yes. A key advantage is that it would remove the need for passing function values into line searches and other things.

@ChrisRackauckas
Contributor

ChrisRackauckas commented Feb 9, 2017

I would like to add:

  • Less strict typing of inputs.

For the native Julia methods, there's no reason to restrict to Array{T,N}. AbstractArrays, which by definition should support a linear index, seem like they would work fine in the methods that I checked (see the sketch below). You wouldn't be able to do a type check until v0.6, since it would require triangular dispatch. But I would really like to start optimizing some crazy things like MultiScaleArrays, and it seems the only thing blocking that is the type checks requiring a traditional array. Note that the same change should make Optim compatible with things like SharedArrays, which would be an early step towards parallelism.
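
As a sketch of the proposed relaxation (the function name is hypothetical, and the post-v0.6 where-syntax is used for brevity):

# Accepting any AbstractArray instead of Array{T,N} lets SharedArrays,
# MultiScaleArrays, etc. flow through the native solvers unchanged.
function gradient_step(x::AbstractArray{T}, g::AbstractArray{T}, alpha::T) where T<:Real
    x .- alpha .* g   # broadcasting works for any conforming array type
end

# works for a plain Vector...
gradient_step([1.0, 2.0], [0.1, 0.2], 0.5)
# ...and, with this signature, for any other AbstractArray subtype as well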

@timholy
Contributor

timholy commented Aug 23, 2018

Congrats!! 🎆

@alexandrebrilhante
Contributor

I will attempt to work on #272. I'll look at the C implementation in NLopt.
