-
In the single-objective case: of course I can use the brute-force sampler, but I would love to have an optimization that moves towards the optimum, with the caveat that I do not intend to reach it; instead I want several good but DIFFERENT candidates.
-
However, what is your assumption about your search space? That there are many different good solutions of similar quality, and that those sit in separate local optima?
-
Re callback: I get how I would ensure a "good" candidate (via the objective threshold, as you say), but I have no idea how I would tackle the "different" part. I can access the study during a callback, so I can access all trials. I have no idea, though, how I would go about defining the distance between two trials in terms of the parameter space ...

My assumptions about the search space are exactly as you say: lots of near-optimal local solutions. I further assume the search space is vast and a global optimum exists, but that it is pointless to search for it, because its improvement over the local optima is negligible compared to the cost of finding it.
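A minimal sketch of what such a callback could look like, assuming purely numeric parameters with known bounds; the threshold, radius, bounds, and candidate count below are made-up placeholders:

```python
import math

import optuna

# All of these settings are hypothetical placeholders; adjust to your problem.
OBJECTIVE_THRESHOLD = 0.9  # minimum value to count as "good" (maximizing)
MIN_DISTANCE = 0.2         # minimum spread in normalized parameter space
PARAM_BOUNDS = {"x": (-10.0, 10.0), "y": (0.0, 1.0)}  # bounds for normalization


def param_distance(params_a, params_b):
    """Euclidean distance between two trials' parameter dicts, with each
    numeric dimension rescaled to [0, 1] via the known search-space bounds."""
    total = 0.0
    for name, (low, high) in PARAM_BOUNDS.items():
        a = (params_a[name] - low) / (high - low)
        b = (params_b[name] - low) / (high - low)
        total += (a - b) ** 2
    return math.sqrt(total)


kept = []  # trials that are both good and mutually distant


def diversity_callback(study, trial):
    # Called after every finished trial; `trial` is a FrozenTrial.
    if trial.value is None or trial.value < OBJECTIVE_THRESHOLD:
        return
    if all(param_distance(trial.params, t.params) >= MIN_DISTANCE for t in kept):
        kept.append(trial)
    if len(kept) >= 5:  # stop once enough diverse candidates are collected
        study.stop()


# Usage: study.optimize(objective, n_trials=500, callbacks=[diversity_callback])
```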
-
So I've been thinking about it, and I don't know if optuna is the right tool for this. It really is a multi-objective optimization, where one goal is a high objective value and the other is finding good but different parameters. I thought of some kind of distance measure.
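One way to encode that as a second objective is to reward the distance to the closest earlier completed trial, so the Pareto front trades quality against diversity. A rough sketch, where the single parameter x and the quadratic objective are stand-ins:

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    quality = -(x ** 2)  # made-up objective; replace with your own

    # Second objective: distance to the closest earlier completed trial.
    earlier = [t for t in trial.study.trials
               if t.state == optuna.trial.TrialState.COMPLETE]
    if earlier:
        diversity = min(abs(x - t.params["x"]) for t in earlier)
    else:
        diversity = 20.0  # search-space width: the first trial is maximally "different"
    return quality, diversity


study = optuna.create_study(directions=["maximize", "maximize"])
study.optimize(objective, n_trials=100)
for t in study.best_trials:  # Pareto-optimal trials: good AND spread out
    print(t.values, t.params)
```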
-
Thank you very much for these thoughts. I agree with everything you say. As you mention GAs: I had this idea as well, but I do not want to start again from scratch. Completely off-topic, but do you know of a Python GA framework I could look into? Again, thank you very much for the discussion!
-
Thank you very much for these thoughts. I agree with everything you say.
If I use the BruteForceSampler I am guaranteed to get a spread of parameters, but then I am not optimizing! optuna is still an amazing tool for managing a study, trials, parameter sampling, and the execution of the loop ... but I am not using it as intended.
If I use TPE I will be unhappy with the parameter spread, because the sampler constantly wants to converge.
What I would need is a switch to tell the sampler: that point is locally optimal, please widen your search again. No idea if that is possible. Also no idea whether TPE would do that automatically if I constraint-discarded, say, 50 trials in a row. There is an issue…
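On the constraint-discarding idea: TPESampler does accept an (experimental) constraints_func, which could mark trials that land too close to already-found optima as infeasible, so the sampler steers away from those regions. A sketch, where found_optima, MIN_DISTANCE, and the toy objective are all assumptions:

```python
import optuna

# Hypothetical registry of already-found local optima, filled in elsewhere
# (e.g. from a previous study or a callback).
found_optima = [{"x": 2.0}, {"x": -3.5}]
MIN_DISTANCE = 0.5  # assumed exclusion radius around each known optimum


def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    # Store the constraint value as a user attribute so the sampler can read
    # it later: a positive value means "violated", i.e. too close to a
    # known optimum.
    closest = min((abs(x - o["x"]) for o in found_optima), default=1e9)
    trial.set_user_attr("constraint", MIN_DISTANCE - closest)
    return -(x ** 2)  # made-up objective


def constraints_func(trial):
    # TPE treats values <= 0 as feasible.
    return (trial.user_attrs["constraint"],)


sampler = optuna.samplers.TPESampler(constraints_func=constraints_func)
study = optuna.create_study(direction="maximize", sampler=sampler)
study.optimize(objective, n_trials=100)
```

Note that infeasible trials are not literally discarded; TPE's constrained variant just down-weights them when proposing new points, which may or may not widen the search the way you want.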