
Engine

Backend

optimization_objective(config, trainer_config, finetune=False, gpu=0.0)

Defines the `lightning_objective` function, which the tuner uses to minimize or maximize the metric.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `config` | `dict` | Key-value pairs of hyperparameters. | *required* |
| `trainer_config` | `dict` | Configurations passed directly to the Lightning Trainer. | *required* |
| `gpu` | `Optional[float]` | GPU resources per trial. | `0.0` |
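
For context, a minimal sketch of how an objective in this form is typically handed to Ray Tune. The import of `optimization_objective`, the search space, and `my_trainer_config` below are assumptions for illustration, not part of this API.

    from functools import partial
    from ray import tune

    # Illustrative search space and Trainer settings (not defined by this API).
    search_space = {"lr": tune.loguniform(1e-4, 1e-1)}
    my_trainer_config = {"max_epochs": 5}

    # Bind the fixed arguments; Ray Tune supplies `config` for each trial.
    objective = partial(
        optimization_objective, trainer_config=my_trainer_config, gpu=0.5
    )

    analysis = tune.run(
        objective,
        config=search_space,
        num_samples=10,
        resources_per_trial={"gpu": 0.5},
    )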

AutoModel

Bases: BaseAutoModel, ABC

Base model that defines hyperparameter search methods and initializes Ray. All other autotasks are implementations of AutoModel.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `datamodule` | `flash.DataModule` | DataModule from Flash or PyTorch Lightning. | `None` |
| `max_epochs` | `int` | Maximum number of epochs for which the model will train. | `10` |
| `max_steps` | `Optional[int]` | Maximum number of steps for each epoch. Defaults to None. | `None` |
| `optimization_metric` | `str` | Metric on which the hyperparameter search will run. | `None` |
| `n_trials` | `int` | Number of trials for HPO. | `20` |
| `suggested_conf` | `Dict` | Any extra suggested configuration. | `None` |
| `timeout` | `int` | HPO will stop after this timeout. | `600` |
| `prune` | `bool` | Whether to stop unpromising trials early. | `True` |
| `backend_type` | `Optional[str]` | Training backend type: PL / torch / fastai. Default is PL. | *required* |
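
A rough sketch, assuming a Flash image datamodule, of how these constructor arguments might be supplied to a concrete autotask such as AutoClassifier (used in the hp_tune example below); the folder paths and the metric name are placeholders.

    from flash.image import ImageClassificationData

    # Any Flash / PyTorch Lightning DataModule works here; paths are placeholders.
    datamodule = ImageClassificationData.from_folders(
        train_folder="data/train",
        val_folder="data/val",
        batch_size=4,
    )

    automodel = AutoClassifier(               # concrete implementation of AutoModel
        datamodule,
        max_epochs=10,
        optimization_metric="val_accuracy",   # assumed metric name
        n_trials=20,
        timeout=600,                          # seconds (assumed unit)
        prune=True,
    )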

hp_tune(name=None, ray_config=None, trainer_config=None, mode=None, gpu=0, cpu=None, resume=False, finetune=False)

Searches hyperparameters and builds the model with the best parameters.

    automodel = AutoClassifier(data)  # implements `AutoModel`
    automodel.hp_tune(name="gflow-example", gpu=1)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `name` | `Optional[str]` | Name of the experiment. | `None` |
| `ray_config` | `dict` | Configuration passed to `ray.tune.run(...)`. | `None` |
| `trainer_config` | `dict` | Configuration passed to `pl.trainer.fit(...)`. | `None` |
| `mode` | `Optional[str]` | Whether to maximize or minimize the `optimization_metric`. | `None` |
| `gpu` | `Optional[float]` | Amount of GPU resources per trial. | `0` |
| `cpu` | `float` | CPU cores per trial. | `None` |
| `resume` | `bool` | Whether to resume training or not. | `False` |
| `finetune` | `bool` | Whether to train the whole model or only fine-tune the head layer. | `False` |
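
Extending the call from the example above, a sketch of how the remaining arguments might be passed; the `ray_config` and `trainer_config` keys shown are illustrative and depend on the Ray Tune and Lightning versions in use.

    automodel.hp_tune(
        name="gflow-example",
        ray_config={"num_samples": 10},      # illustrative key, forwarded to ray.tune.run(...)
        trainer_config={"ckpt_path": None},  # illustrative key, forwarded to pl.trainer.fit(...)
        mode="max",                          # maximize the optimization_metric
        gpu=1,                               # GPU resources per trial
        cpu=2,                               # CPU cores per trial
        finetune=True,                       # fine-tune the head layer instead of the whole model
    )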

AutoClassifier

Bases: AutoModel

Implements AutoModel for classification autotasks.

build_model(config) abstractmethod

Every task implementing AutoClassifier must implement a `build_model` method that builds a `torch.nn.Module` from a dictionary config and returns the model.
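
A minimal sketch of what such an implementation might look like; the class name, config keys (`in_features`, `hidden_dim`, `num_classes`), and architecture are all hypothetical.

    import torch

    class MyClassifier(AutoClassifier):
        def build_model(self, config: dict) -> torch.nn.Module:
            # All keys below are hypothetical hyperparameters supplied by the tuner.
            in_features = config.get("in_features", 128)
            hidden_dim = config.get("hidden_dim", 64)
            num_classes = config.get("num_classes", 2)
            return torch.nn.Sequential(
                torch.nn.Linear(in_features, hidden_dim),
                torch.nn.ReLU(),
                torch.nn.Linear(hidden_dim, num_classes),
            )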

