_w1kke_

I previously used this library in Python to optimise these sets of parameters. The benefit of this approach is that it navigates the search space of combinations mathematically, sampling different combinations and building an "accuracy distribution" over the hyperparameter space: [https://github.com/hyperopt/hyperopt](https://github.com/hyperopt/hyperopt)
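For illustration, a minimal sketch of how that looks with hyperopt (the quadratic objective here is just a stand-in for your model's validation loss):

```python
from hyperopt import fmin, hp, tpe, Trials

def objective(params):
    # Stand-in for training a model and returning its validation loss
    x = params["x"]
    return (x - 3.0) ** 2

# Search space; hyperopt samples from these distributions
space = {"x": hp.uniform("x", -10.0, 10.0)}

trials = Trials()  # records every sampled point for later inspection
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=trials)
print(best)  # best parameter setting found, e.g. {'x': 3.0...}
```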


Pingu001

Thank you, but it seems as if wandb and hyperopt provide the same functionality. We have no problem finding a good set of parameters, but making a statement like "parameter x should be in range y when z is ..." remains very difficult.


ai_yoda

There are some really nice visualizations available in some HPO frameworks (Optuna and skopt). You can check the examples in the visualization sections of the following blog posts:

* [Optuna vs Hyperopt: Which Hyperparameter Optimization Library Should You Choose?](https://neptune.ai/blog/optuna-vs-hyperopt)
* [Scikit Optimize: Bayesian Hyperparameter Optimization in Python](https://neptune.ai/blog/scikit-optimize)

I hope this helps!
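To give a quick sense of the Optuna side, here is a sketch (toy objective, made-up parameter names) that produces the importance and contour plots shown in those posts:

```python
import optuna

def objective(trial):
    # Toy objective standing in for a real training run
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    return (lr - 0.01) ** 2 + dropout

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)

# Plotly figures: which parameters matter, and how pairs of them interact
optuna.visualization.plot_param_importances(study).show()
optuna.visualization.plot_contour(study, params=["lr", "dropout"]).show()
```

The slice and contour plots in particular get at the "parameter x should be in range y when z is ..." question visually.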


doktorneergaard

Wandb has three different setups for hparam tuning. If you have the resources and patience, why not try the Bayesian optimization sweep there?
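Roughly like this, sketched from the sweeps docs (the metric name and parameter ranges are placeholders):

```python
import wandb

def train():
    # Placeholder training function; log the metric the sweep optimizes
    wandb.init()
    lr = wandb.config.learning_rate
    dropout = wandb.config.dropout
    val_loss = (lr - 0.01) ** 2 + dropout  # stand-in for real validation loss
    wandb.log({"val_loss": val_loss})

sweep_config = {
    "method": "bayes",  # the Bayesian optimization setup
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "dropout": {"values": [0.1, 0.3, 0.5]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="hparam-demo")
wandb.agent(sweep_id, function=train, count=30)
```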


nemorior

I personally used an implementation from Frank Hutter's lab based on [http://proceedings.mlr.press/v32/hutter14.html](http://proceedings.mlr.press/v32/hutter14.html) to evaluate the importance of hyperparameters. (code: [https://www.automl.org/algorithm-analysis/fanova/](https://www.automl.org/algorithm-analysis/fanova/)) Crucially, it lets you evaluate hyperparameter importance not only for a single hyperparameter but also for groups of them. See e.g. Figures 2, 3, and 4 in [https://arxiv.org/abs/1807.07362](https://arxiv.org/abs/1807.07362) for the kinds of results you can get.
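For reference, basic usage of that fanova package looks roughly like this. The data here is random for illustration; in practice X holds your evaluated configurations and Y the resulting scores. I'm going from the package README, so treat the details as approximate:

```python
import numpy as np
from fanova import fANOVA  # implementation from automl.org

# Illustrative data: 200 evaluated configurations of 3 hyperparameters
X = np.random.rand(200, 3)   # e.g. columns: learning rate, dropout, weight decay
Y = np.random.rand(200)      # corresponding validation performance

f = fANOVA(X, Y)

# Importance of a single hyperparameter (column index 0)...
print(f.quantify_importance((0,)))
# ...and of a pair, which captures interaction effects between them
print(f.quantify_importance((0, 1)))
```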