Core

Core building blocks for AutoML tasks.

AutoModel

Base model that defines hyperparameter search methods and initializes Ray. All other tasks are implementations of AutoModel.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| datamodule | flash.DataModule | DataModule from Flash or PyTorch Lightning | required |
| max_epochs | int | Maximum number of epochs for which the model will train | required |
| max_steps | Optional[int] | Maximum number of steps for each epoch. Defaults to None. | required |
| optimization_metric | str | Value on which the hyperparameter search will run. | required |
| n_trials | int | Number of trials for HPO | required |
| suggested_conf | Dict | Any extra suggested configuration | required |
| timeout | int | HPO will stop after the timeout | required |
| prune | bool | Whether to stop unpromising trials early. | required |
| tune_confs | Dict | Ray Tune configurations. See more at the Ray docs. | required |
| best_trial | bool | If True, the model will be loaded with the best weights from HPO. | required |
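To make the roles of `n_trials`, `timeout`, and `prune` concrete, here is a minimal stdlib-only sketch of a search loop. This is not the actual implementation (which delegates to Ray Tune); the objective, the sampled learning-rate range, and the use of `None` as a "pruned trial" signal are illustrative assumptions.

```python
import random
import time

def toy_hpo(objective, n_trials=8, timeout=60.0, prune=True):
    """Toy search loop: NOT the real AutoModel internals, just the shape
    of what n_trials, timeout, and prune control."""
    start = time.monotonic()
    best = None
    for _ in range(n_trials):                      # n_trials: search budget
        if time.monotonic() - start > timeout:     # timeout: hard stop for HPO
            break
        lr = 10 ** random.uniform(-4, -1)          # sample one hyperparameter
        score = objective(lr)
        if prune and score is None:                # prune: drop unpromising trials
            continue
        if best is None or score > best[0]:        # maximize optimization_metric
            best = (score, lr)
    return best                                    # (best score, best lr) or None
```

A real run would instead launch one training per trial and report the chosen `optimization_metric` back to the scheduler.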

hp_tune(self, name=None, ray_config=None, trainer_config=None, mode=None, gpu=0, cpu=None, resume=False)

Searches hyperparameters and builds the model with the best parameters.

    automodel = AutoClassifier(data)  # implements `AutoModel`
    automodel.hp_tune(name="gflow-example", gpu=1)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | Optional[str] | Name of the experiment. | None |
| ray_config | Optional[dict] | Configuration passed to ray.tune.run(...) | None |
| trainer_config | Optional[dict] | Configuration passed to pl.trainer.fit(...) | None |
| mode | Optional[str] | Whether to maximize or minimize the optimization_metric. | None |
| gpu | Optional[float] | Amount of GPU resource per trial. | 0 |
| cpu | Optional[float] | CPU cores per trial. | None |
| resume | bool | Whether to resume the training or not. | False |
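When `mode` is left as None, the direction of the search has to be inferred from the metric. A plausible stdlib-only sketch of that resolution is below; the "metrics containing `loss` are minimized, everything else maximized" rule is an assumption for illustration, not necessarily what the library does.

```python
def resolve_mode(optimization_metric, mode=None):
    """Hypothetical helper: pick 'min' or 'max' for the search.

    Assumption (illustrative): an explicit mode always wins; otherwise
    loss-like metrics are minimized and all others are maximized.
    """
    if mode is not None:
        return mode
    return "min" if "loss" in optimization_metric else "max"
```

With this rule, `hp_tune()` on a model tracking `val_loss` would minimize it, while one tracking `val_accuracy` would maximize it, unless `mode` is passed explicitly.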

AutoClassifier

Implements AutoModel for classification tasks.

build_model(self, config)

Every task implementing AutoClassifier must implement a build_model method that builds a torch.nn.Module from a dictionary config and returns the model.
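The contract is: one trial's hyperparameter dictionary in, one model out. The sketch below uses a plain Python class as a stand-in for torch.nn.Module so it runs without Torch installed; the keys `hidden_size` and `num_classes` are hypothetical example hyperparameters, not a documented schema.

```python
class ToyClassifier:
    """Plain-Python stand-in for a torch.nn.Module (illustration only)."""

    def __init__(self, hidden_size, num_classes):
        self.hidden_size = hidden_size
        self.num_classes = num_classes

def build_model(config):
    """Hypothetical build_model: map a trial's config dict to a model.

    config comes from the hyperparameter search,
    e.g. {"hidden_size": 128, "num_classes": 10}.
    """
    return ToyClassifier(
        hidden_size=config["hidden_size"],
        num_classes=config.get("num_classes", 10),
    )
```

In a real subclass, `build_model` would assemble and return a torch.nn.Module (for example, a backbone plus a classification head sized by the config).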


Last update: August 26, 2021