# Autotasks

## autotask

```python
autotask(datamodule=None, train_dataloader=None, val_dataloader=None, num_classes=None, task=None, data_type=None, max_epochs=10, max_steps=10, n_trials=100, optimization_metric=None, suggested_backbones=None, suggested_conf=None, timeout=600, prune=True)
```
Parameters:

Name | Type | Description | Default |
---|---|---|---|
datamodule | Optional[DataModule] | PyTorch Lightning DataModule | None |
train_dataloader | Optional[DataLoader] | torch dataloader | None |
val_dataloader | Optional[DataLoader] | torch dataloader | None |
num_classes | Optional[int] | number of classes | None |
task | Optional[str] | type of task. Check available autotasks with `available_tasks()` | None |
data_type | Optional[str] | type of data - image, text or infer | None |
max_epochs | int | maximum number of training epochs. default=10 | 10 |
n_trials | int | number of hyperparameter search trials. default=100 | 100 |
optimization_metric | Optional[str] | metric to optimize during the search. defaults to None | None |
suggested_backbones | Union[List, str, None] | backbone(s) to search over. defaults to None | None |
suggested_conf | Optional[dict] | sets Trial suggestions for optimizer, learning rate, and all the hyperparameters | None |
timeout | int | hyperparameter search will stop after timeout (in seconds) | 600 |
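As a minimal sketch of how `suggested_conf` could be used to narrow the search space — the exact key names and value formats depend on the gradsflow version, so the keys below are illustrative assumptions, not taken from the library:

```python
# Hypothetical `suggested_conf` dictionary: each entry constrains what the
# hyperparameter search is allowed to suggest per trial (key names are
# assumptions for illustration).
suggested_conf = {
    "optimizers": ["adam", "sgd"],  # optimizers the search may pick from
    "lr": (5e-4, 1e-3),             # learning-rate range to sample within
}
```

Such a dictionary would then be passed as `autotask(..., suggested_conf=suggested_conf)`.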
Returns:

Type | Description |
---|---|
 | Implementation of the selected task |
## available_tasks

```python
available_tasks()
```

Get a list of all available autotasks.
## AutoSummarization

Bases: `AutoClassifier`

Automatically finds a Text Summarization model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
datamodule | Optional[DataModule] | PyTorch Lightning DataModule | required |
train_dataloader | Optional[DataLoader] | torch dataloader | required |
val_dataloader | Optional[DataLoader] | torch dataloader | required |
num_classes | Optional[int] | number of classes | required |
max_epochs | int | maximum number of training epochs. default=10 | required |
n_trials | int | number of hyperparameter search trials. default=100 | required |
optimization_metric | Optional[str] | metric to optimize during the search. defaults to None | required |
suggested_backbones | Union[List, str, None] | backbone(s) to search over. defaults to None | required |
suggested_conf | Optional[dict] | sets Trial suggestions for optimizer, learning rate, and all the hyperparameters | required |
timeout | int | hyperparameter search will stop after timeout (in seconds) | required |
Examples:

```python
from gradsflow import AutoSummarization

from flash.core.data.utils import download_data
from flash.text import SummarizationData, SummarizationTask

# 1. Download the data
download_data("https://pl-flash-data.s3.amazonaws.com/xsum.zip", "data/")

# 2. Load the data
datamodule = SummarizationData.from_csv(
    "input",
    "target",
    train_file="data/xsum/train.csv",
    val_file="data/xsum/valid.csv",
    test_file="data/xsum/test.csv",
)

model = AutoSummarization(
    datamodule,
    max_epochs=10,
    optimization_metric="val_accuracy",
    timeout=300,
)
model.hp_tune()
```
### build_model

```python
build_model(config)
```

Build `SummarizationModel` from `ray.tune` hyperparameter configs or via `_search_space` dictionary arguments.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
backbone | str | backbone model name | required |
optimizer | str | PyTorch optimizer name | required |
learning_rate | float | learning rate for the model | required |
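A trial's sampled hyperparameters arrive as a plain dictionary. As a sketch of the shape `build_model(config)` might receive, assuming keys matching the parameter names above (the backbone string is a hypothetical example, not a value prescribed by the library):

```python
# Illustrative config dict of the kind a hyperparameter search trial could
# pass to build_model(config); key names mirror the documented parameters.
config = {
    "backbone": "sshleifer/distilbart-cnn-12-6",  # example summarization backbone
    "optimizer": "adam",                          # PyTorch optimizer name
    "learning_rate": 1e-4,                        # sampled learning rate
}
```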