Implement hyperparameter optimization via Gaussian search from scikit-optimize #1292
base: master
Conversation
@@ -3451,6 +3451,137 @@ def randomized_search(self, param_distributions, X, y=None, cv=3, n_iter=10, par
    def _convert_to_asymmetric_representation(self):
        self._object._convert_oblivious_to_asymmetric()

    def gaussian_search(self, param_distributions, X, y=None, cv=3, n_random_starts=10,
The interface should be the same as in `randomized_search` and `grid_search`, except for the parameter distributions.
                        partition_random_seed=None, n_jobs=1, const_params={}, to_minimize_objective=True,
                        refit=True, train_size=0.8, verbose=True, plot=False):
        import skopt
        if n_calls<= 0:
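Per the interface comment above: a hypothetical signature sketch aligned with `randomized_search` / `grid_search`. The parameter names of those methods are assumed here, not copied from the codebase; only the search-space argument (skopt dimensions instead of plain distributions) should differ.

```python
# Hypothetical stub: mirrors the assumed randomized_search/grid_search
# interface; only param_distributions (skopt search dimensions) differs.
def gaussian_search(self, param_distributions, X, y=None, cv=3,
                    partition_random_seed=0, calc_cv_statistics=True,
                    search_by_train_test_split=True, refit=True,
                    shuffle=True, train_size=0.8, verbose=True, plot=False):
    raise NotImplementedError  # body omitted; signature sketch only
```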
Please follow the code style everywhere (e.g. `n_calls <= 0`, not `n_calls<= 0`).
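For the spacing issue flagged here: PEP 8 asks for a single space on each side of comparison operators. A small sketch of what the cleaned-up check could look like (`validate_n_calls` is a made-up helper name for illustration):

```python
def validate_n_calls(n_calls):
    # PEP 8: one space on each side of binary operators ("n_calls <= 0",
    # not "n_calls<= 0").
    if n_calls <= 0:
        raise ValueError("n_calls must be positive, got {}".format(n_calls))
    return n_calls
```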
    def gaussian_search(self, param_distributions, X, y=None, cv=3, n_random_starts=10,
Docs are needed for this function, so that the user will understand the meaning of all the parameters.
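A minimal docstring sketch in numpydoc style; the parameter descriptions are my assumptions about the intended semantics, not authoritative documentation:

```python
def gaussian_search(self, param_distributions, X, y=None, cv=3,
                    n_random_starts=10, n_calls=100):
    """Search for best hyperparameters using Gaussian-process optimization (skopt).

    Parameters
    ----------
    param_distributions : dict
        Parameter names mapped to skopt search dimensions
        (``Real``, ``Integer`` or ``Categorical``).
    X : array-like or catboost.Pool
        Training data.
    y : array-like, optional
        Labels. Ignored if ``X`` is a ``Pool``.
    cv : int, default 3
        Number of cross-validation folds.
    n_random_starts : int, default 10
        Random evaluations before the surrogate model is fitted.
    n_calls : int, default 100
        Total number of objective evaluations.
    """
    raise NotImplementedError  # docstring sketch only
```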
            self.model.set_params(**self.const_params)
            self.model.set_params(**params_dict)
            if self.search_by_train_test_split:
The train-test split must be done once for the whole parameter tuning process. Quantization must also be done once; or, if quantization parameters are among the optimized ones, it must be redone whenever those parameters change.
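The split-once point can be sketched as follows. This is toy code, not CatBoost's API: the "model" is a constant predictor, and `tune` is a made-up name. The key property is that the partition is computed before the per-candidate objective, so every candidate is scored on the same validation fold:

```python
import random

def tune(candidates, X, y, train_size=0.8, seed=0):
    # Split ONCE, before the tuning loop, so every candidate
    # is evaluated on the same train/validation partition.
    rng = random.Random(seed)
    idx = list(range(len(X)))
    rng.shuffle(idx)
    cut = int(train_size * len(idx))
    train_idx, valid_idx = idx[:cut], idx[cut:]

    def objective(params):
        # Toy "model": predicts y as params["bias"]; score = MSE on the
        # fixed validation fold. No re-splitting happens per candidate.
        return sum((y[i] - params["bias"]) ** 2 for i in valid_idx) / len(valid_idx)

    scores = {p["bias"]: objective(p) for p in candidates}
    best = min(scores, key=scores.get)
    return best, train_idx, valid_idx
```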
            self.model.set_params(**params_dict)
            if self.search_by_train_test_split:
                if isinstance(self.X, Pool):
                    self.X = self.X.get_features()
This will not work if you have a quantized pool, categorical features, or text features.
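One way around this is to split by row indices and slice the `Pool` (CatBoost's `Pool` has a `slice` method returning a row subset) instead of extracting a raw feature matrix, so categorical/text/quantization metadata survives. The class below is a hypothetical stand-in, not the real `catboost.Pool`; it only illustrates the metadata-preserving idea:

```python
import random

class ToyPool:
    """Hypothetical stand-in for catboost.Pool, used only to show why
    row slicing beats raw feature extraction: metadata survives."""
    def __init__(self, data, label, cat_features=()):
        self.data = data
        self.label = label
        self.cat_features = tuple(cat_features)  # metadata to preserve

    def num_row(self):
        return len(self.data)

    def slice(self, idx):
        # Slicing keeps cat_features metadata; a raw matrix from
        # get_features() would lose it.
        return ToyPool([self.data[i] for i in idx],
                       [self.label[i] for i in idx],
                       self.cat_features)

def split_pool(pool, train_size=0.8, seed=0):
    # Row-index split; works for any pool that supports slicing.
    idx = list(range(pool.num_row()))
    random.Random(seed).shuffle(idx)
    cut = int(train_size * len(idx))
    return pool.slice(idx[:cut]), pool.slice(idx[cut:])
```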
                fold_count=self.cv,
                partition_random_seed=self.partition_random_seed)
            result = list(result["test-" + self.loss_function + "-mean"])[-1]
            if not self.to_minimize_objective:
This value should not be specified by the user; it is always fully determined by the evaluation metric.
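One way to derive the direction automatically rather than taking a `to_minimize_objective` flag: negate maximized metrics so `gp_minimize` can always minimize. The metric set below is illustrative, not CatBoost's authoritative list:

```python
# Illustrative (incomplete) set of metrics that are maximized;
# everything else is treated as minimized.
_MAXIMIZED = {"AUC", "Accuracy", "F1", "R2", "NDCG"}

def to_skopt_objective(metric_name, score):
    """Return a value suitable for gp_minimize: negate maximized metrics."""
    base = metric_name.split(":")[0]  # strip params like "NDCG:top=10"
    return -score if base in _MAXIMIZED else score
```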
        from skopt.utils import use_named_args
        from skopt import gp_minimize
        from skopt.space import Real, Categorical, Integer
Please remove all unused imports.
    def gaussian_search(self, param_distributions, X, y=None, cv=3, n_random_starts=10,
Since the method requires skopt-style parameters, let's name it `skopt_parameter_search`.
                        random_state=None, n_calls=100, search_by_train_test_split=True,
                        partition_random_seed=None, n_jobs=1, const_params={}, to_minimize_objective=True,
                        refit=True, train_size=0.8, verbose=True, plot=False):
        import skopt
Please import only the things that are actually used inside the method.
                params=params_dict,
                fold_count=self.cv,
                partition_random_seed=self.partition_random_seed)
            result = list(result["test-" + self.loss_function + "-mean"])[-1]
This will not work for all loss functions; also, `loss_function` might be None at this point in the code.
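A hedged sketch of building the result-column name defensively (`cv_result_column` is a made-up helper; the `"RMSE"` default is an assumption for illustration, not CatBoost's documented fallback):

```python
def cv_result_column(params, default_metric="RMSE"):
    # loss_function may be None; prefer eval_metric, then loss_function,
    # then a default, instead of indexing with a possibly-None name.
    metric = params.get("eval_metric") or params.get("loss_function") or default_metric
    if not isinstance(metric, str):  # metric objects: fall back to str()
        metric = str(metric)
    return "test-{}-mean".format(metric)
```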
I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla/?lang=en
This pull request is a part of a task of the ML Engineering course in the YDSA.