# Autotasks

```python
autotask(
    datamodule=None,
    train_dataloader=None,
    val_dataloader=None,
    num_classes=None,
    task=None,
    data_type=None,
    max_epochs=10,
    max_steps=10,
    n_trials=100,
    optimization_metric=None,
    suggested_backbones=None,
    suggested_conf=None,
    timeout=600,
    prune=True,
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| datamodule | Optional[DataModule] | PyTorch Lightning DataModule | None |
| train_dataloader | Optional[DataLoader] | torch dataloader | None |
| val_dataloader | Optional[DataLoader] | torch dataloader | None |
| num_classes | Optional[int] | number of classes | None |
| task | Optional[str] | type of task. Check available autotasks with `available_tasks()`. | None |
| data_type | Optional[str] | type of data - image, text or infer | None |
| max_epochs | int | maximum number of training epochs | 10 |
| max_steps | int | maximum number of training steps | 10 |
| n_trials | int | number of hyperparameter search trials | 100 |
| optimization_metric | Optional[str] | metric to optimize during the search | None |
| suggested_backbones | Union[List, str, None] | backbone(s) to restrict the search to | None |
| suggested_conf | Optional[dict] | sets Trial suggestions for optimizer, learning rate, and all the hyperparameters | None |
| timeout | int | hyperparameter search will stop after timeout (seconds) | 600 |
Returns:

| Type | Description |
|---|---|
| | Implementation of the selected task |
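The `suggested_conf` argument narrows the hyperparameter search space before tuning starts. The exact keys accepted depend on the gradsflow version, so the dictionary below is a hypothetical illustration of the shape such a configuration takes, not a definitive schema:

```python
# Hypothetical suggested_conf: restrict the optimizers and the
# learning-rate range that each trial may sample from.
suggested_conf = {
    "optimizer": ["adam", "sgd"],
    "lr": (5e-4, 1e-3),
}
```

A narrower search space like this lets a small `n_trials` budget cover the remaining choices more densely.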
## available_tasks()

Get a list of all available autotasks.

## image
### AutoImageClassifier (AutoClassifier)

Automatically finds an Image Classification model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| datamodule | Optional[DataModule] | PyTorch Lightning DataModule | required |
| train_dataloader | Optional[DataLoader] | torch dataloader | required |
| val_dataloader | Optional[DataLoader] | torch dataloader | required |
| num_classes | Optional[int] | number of classes | required |
| max_epochs | int | maximum number of training epochs. default=10 | required |
| n_trials | int | number of hyperparameter search trials. default=100 | required |
| optimization_metric | Optional[str] | metric to optimize during the search | required |
| suggested_backbones | Union[List, str, None] | backbone(s) to restrict the search to | required |
| suggested_conf | Optional[dict] | sets Trial suggestions for optimizer, learning rate, and all the hyperparameters | required |
| timeout | int | hyperparameter search will stop after timeout (seconds) | required |
| backend_type | Optional[str] | training loop code. Defaults to None | required |
Examples:

```python
from flash.core.data.utils import download_data
from flash.image import ImageClassificationData

from gradsflow import AutoImageClassifier

# 1. Create the DataModule
download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip", "./data")

datamodule = ImageClassificationData.from_folders(
    train_folder="data/hymenoptera_data/train/",
    val_folder="data/hymenoptera_data/val/",
)

model = AutoImageClassifier(
    datamodule,
    max_epochs=10,
    optimization_metric="val_accuracy",
    timeout=300,
)
model.hp_tune()
```
### build_model(self, config)

Build an ImageClassifier model from `ray.tune` hyperparameter configs or via `_search_space` dictionary arguments.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| backbone | str | image classification backbone name - resnet18, resnet50, ... | required |
| optimizer | str | PyTorch optimizer name | required |
| learning_rate | float | learning rate for the model | required |
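Per the table above, the `config` handed to `build_model` carries a backbone name, an optimizer name, and a learning rate. A hypothetical config, shaped as `ray.tune` might sample it during one trial (the concrete values here are illustrative only):

```python
# Hypothetical trial config: the keys mirror the parameters
# documented above (backbone, optimizer, learning_rate).
config = {
    "backbone": "resnet18",
    "optimizer": "adam",
    "learning_rate": 1e-3,
}
```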
## text
### AutoTextClassifier (AutoClassifier)

Automatically finds a Text Classification model.
Examples:

```python
from flash.core.data.utils import download_data
from flash.text import TextClassificationData

from gradsflow import AutoTextClassifier

download_data("https://pl-flash-data.s3.amazonaws.com/imdb.zip", "./data/")

datamodule = TextClassificationData.from_csv(
    "review",
    "sentiment",
    train_file="data/imdb/train.csv",
    val_file="data/imdb/valid.csv",
)

model = AutoTextClassifier(
    datamodule,
    suggested_backbones=["sgugger/tiny-distilbert-classification"],
    max_epochs=10,
    optimization_metric="val_accuracy",
    timeout=300,
)
model.hp_tune()
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| datamodule | Optional[DataModule] | PyTorch Lightning DataModule | required |
| train_dataloader | Optional[DataLoader] | torch dataloader | required |
| val_dataloader | Optional[DataLoader] | torch dataloader | required |
| num_classes | Optional[int] | number of classes | required |
| max_epochs | int | maximum number of training epochs. default=10 | required |
| n_trials | int | number of hyperparameter search trials. default=100 | required |
| optimization_metric | Optional[str] | metric to optimize during the search | required |
| suggested_backbones | Union[List, str, None] | backbone(s) to restrict the search to | required |
| suggested_conf | Optional[dict] | sets Trial suggestions for optimizer, learning rate, and all the hyperparameters | required |
| timeout | int | hyperparameter search will stop after timeout (seconds) | required |
### build_model(self, config)

Build a TextClassifier model from `ray.tune` hyperparameter configs or via `_search_space` dictionary arguments.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| backbone | str | text classification backbone name, e.g. sgugger/tiny-distilbert-classification | required |
| optimizer | str | PyTorch optimizer name | required |
| learning_rate | float | learning rate for the model | required |
### AutoSummarization (AutoClassifier)

Automatically finds a Text Summarization model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| datamodule | Optional[DataModule] | PyTorch Lightning DataModule | required |
| train_dataloader | Optional[DataLoader] | torch dataloader | required |
| val_dataloader | Optional[DataLoader] | torch dataloader | required |
| num_classes | Optional[int] | number of classes | required |
| max_epochs | int | maximum number of training epochs. default=10 | required |
| n_trials | int | number of hyperparameter search trials. default=100 | required |
| optimization_metric | Optional[str] | metric to optimize during the search | required |
| suggested_backbones | Union[List, str, None] | backbone(s) to restrict the search to | required |
| suggested_conf | Optional[dict] | sets Trial suggestions for optimizer, learning rate, and all the hyperparameters | required |
| timeout | int | hyperparameter search will stop after timeout (seconds) | required |
Examples:

```python
from flash.core.data.utils import download_data
from flash.text import SummarizationData, SummarizationTask

from gradsflow import AutoSummarization

# 1. Download the data
download_data("https://pl-flash-data.s3.amazonaws.com/xsum.zip", "data/")

# 2. Load the data
datamodule = SummarizationData.from_csv(
    "input",
    "target",
    train_file="data/xsum/train.csv",
    val_file="data/xsum/valid.csv",
    test_file="data/xsum/test.csv",
)

model = AutoSummarization(
    datamodule,
    max_epochs=10,
    optimization_metric="val_accuracy",
    timeout=300,
)
model.hp_tune()
```
### build_model(self, config)

Build a SummarizationModel from `ray.tune` hyperparameter configs or via `_search_space` dictionary arguments.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| backbone | str | text summarization backbone name | required |
| optimizer | str | PyTorch optimizer name | required |
| learning_rate | float | learning rate for the model | required |