volkstuner is an open source hyperparameter tuner.
- Deep learning framework agnostic: your training code can be based on PyTorch, MXNet, TensorFlow, etc.
- Task agnostic: you can tune hyperparameters for classification, semantic segmentation, object detection, to name a few.
- Easy to use: you only need to modify a few configurations in your original training code.
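At its core, a hyperparameter tuner repeatedly samples candidate settings, evaluates them, and keeps the best one. A minimal sketch of that loop in plain Python (the toy `objective` stands in for a real training run; this is only an illustration of the idea, not volkstuner's actual API):

```python
import random

# Toy stand-in for a full training run: pretend the validation error is a
# simple function of the learning rate, minimized at lr = 0.1.
def objective(lr):
    return (lr - 0.1) ** 2

def random_search(trials=200, seed=0):
    """Sample learning rates at random and keep the best one seen."""
    rng = random.Random(seed)
    best_lr, best_err = None, float("inf")
    for _ in range(trials):
        lr = rng.uniform(0.0, 1.0)
        err = objective(lr)
        if err < best_err:
            best_lr, best_err = lr, err
    return best_lr, best_err

best_lr, best_err = random_search()
```

A real tuner replaces the toy objective with your training job and uses smarter search strategies, but the overall loop is the same.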
This project is released under the Apache 2.0 license.
- Linux
- Python 3.7+
- PyTorch 1.1.0 or higher
- CUDA 9.0 or higher
We have tested the following OS and software versions:
- OS: Ubuntu 16.04.6 LTS
- CUDA: 9.0
- Python: 3.7.3
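Before installing, you can sanity-check your interpreter and (if already present) PyTorch from Python. This is just a convenience snippet, not part of volkstuner:

```python
import sys

# Quick check against the requirements above. The PyTorch/CUDA checks need
# torch installed, so they are guarded behind the import.
assert sys.version_info >= (3, 7), "volkstuner requires Python 3.7+"

try:
    import torch
    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed yet; install it before using volkstuner")
```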
a. Create a conda virtual environment and activate it.

```shell
conda create -n volkstuner python=3.7 -y
conda activate volkstuner
```

b. Install PyTorch and torchvision following the official instructions, e.g.,

```shell
conda install pytorch torchvision -c pytorch
```

c. Clone the volkstuner repository.

```shell
git clone https://github.com/Media-Smart/volkstuner.git
cd volkstuner
volkstuner_root=${PWD}
```

d. Install dependencies.

```shell
pip install -r requirements.txt
```

a. Config
Modify the configuration as needed in a config file such as `configs/torch/cifar10/baseline.py`. If you don't want to use all GPUs, create a file `~/.volkstuner/resource.yml` and specify the GPU ids there, e.g. `gpu: [2, 3]`.
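For example, to restrict volkstuner to GPUs 2 and 3, `~/.volkstuner/resource.yml` would contain a single line:

```yml
gpu: [2, 3]
```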
b. Run
```shell
python tools/auto.py configs/torch/cifar10/baseline.py
```

Snapshots and logs will be generated at `${volkstuner_root}/workdir`. The best hyperparameters will be stored in the log file.
a. Write your job in the jobs folder, like `jobs/pytorch/cifar10/trainval.py`
b. Define your configuration in the configs folder, like `configs/torch/cifar10/baseline.py`
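The job/config split means the search space lives in the config while the training loop lives in the job. As a purely hypothetical illustration (the keys and distribution names below are made up, not volkstuner's real schema — check `configs/torch/cifar10/baseline.py` for the actual format), a config might declare a search space like this:

```python
import math
import random

# Hypothetical search-space declaration; the real volkstuner schema may differ.
search_space = {
    "lr": ("loguniform", 1e-4, 1e-1),
    "batch_size": ("choice", [64, 128, 256]),
}

def sample(space, rng):
    """Draw one hyperparameter assignment from the space."""
    out = {}
    for name, spec in space.items():
        if spec[0] == "loguniform":
            lo, hi = spec[1], spec[2]
            out[name] = math.exp(rng.uniform(math.log(lo), math.log(hi)))
        elif spec[0] == "choice":
            out[name] = rng.choice(spec[1])
    return out

cfg = sample(search_space, random.Random(0))
```

Log-uniform sampling is the usual choice for scale-sensitive parameters like the learning rate, while categorical choices cover discrete options like batch size.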
This repository is currently maintained by Hongxiang Cai (@hxcai), Yichao Xiong (@mileistone).
This repository borrows a lot of code from autogluon.