# Auto Tasks

## AutoImageClassifier

Automatically finds an Image Classification model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `datamodule` | `DataModule` | PL Lightning DataModule. | required |
| `max_epochs` | `int` | Maximum number of training epochs. | `10` |
| `n_trials` | `int` | Number of hyperparameter search trials. | `100` |
| `optimization_metric` | `Optional[str]` | Metric to optimize during the search. | `None` |
| `suggested_backbones` | `Union[List, str, None]` | Backbone(s) to restrict the search to. | `None` |
| `suggested_conf` | `Optional[dict]` | Sets Trial suggestions for the optimizer, learning rate, and all other hyperparameters. | `None` |
| `timeout` | `int` | Hyperparameter search will stop after `timeout`. | required |
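`suggested_conf` narrows the search space. A hedged sketch of its shape; the exact keys accepted depend on the gradsflow version, so treat these names as illustrative rather than authoritative:

```python
# Illustrative suggested_conf; key names may differ between gradsflow versions.
suggested_conf = {
    "optimizers": ["adam", "sgd"],  # candidate optimizers to try
    "lr": (5e-4, 1e-2),             # learning-rate search range (low, high)
}
```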
Examples:

```python
from flash.core.data.utils import download_data
from flash.image import ImageClassificationData

from gradsflow import AutoImageClassifier

# 1. Create the DataModule
download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip", "./data")

datamodule = ImageClassificationData.from_folders(
    train_folder="data/hymenoptera_data/train/",
    val_folder="data/hymenoptera_data/val/",
)

model = AutoImageClassifier(
    datamodule,
    max_epochs=10,
    optimization_metric="val_accuracy",
    timeout=300,
)
model.hp_tune()
```
### build_model(self, config)

Build an ImageClassifier model from `ray.tune` hyperparameter configs or from a config dictionary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backbone` | `str` | Image classification backbone name, e.g. `resnet18`, `resnet50`, ... | required |
| `optimizer` | `str` | PyTorch optimizer name. | required |
| `learning_rate` | `float` | Learning rate for the model. | required |
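The `config` argument is a plain dictionary keyed by the three parameters above. A minimal sketch of that shape, with illustrative values and a hypothetical validation helper (neither the values nor the helper are part of the gradsflow API):

```python
def validate_config(config):
    """Hypothetical helper: check a build_model-style config for required keys."""
    required = {"backbone", "optimizer", "learning_rate"}
    missing = required - config.keys()
    if missing:
        raise KeyError(f"config is missing keys: {sorted(missing)}")
    return config

# Illustrative values -- not library defaults.
config = validate_config({
    "backbone": "resnet18",
    "optimizer": "adam",
    "learning_rate": 1e-3,
})
```

With a real `AutoImageClassifier` instance, such a dictionary would be passed straight to `model.build_model(config)`.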
## AutoTextClassifier

Automatically finds a Text Classification model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `datamodule` | `DataModule` | PL Lightning DataModule. | required |
| `max_epochs` | `int` | Maximum number of training epochs. | `10` |
| `n_trials` | `int` | Number of hyperparameter search trials. | `100` |
| `optimization_metric` | `Optional[str]` | Metric to optimize during the search. | `None` |
| `suggested_backbones` | `Union[List, str, None]` | Backbone(s) to restrict the search to. | `None` |
| `suggested_conf` | `Optional[dict]` | Sets Trial suggestions for the optimizer, learning rate, and all other hyperparameters. | `None` |
| `timeout` | `int` | Hyperparameter search will stop after `timeout`. | required |
Examples:

```python
from flash.core.data.utils import download_data
from flash.text import TextClassificationData

from gradsflow import AutoTextClassifier

download_data("https://pl-flash-data.s3.amazonaws.com/imdb.zip", "./data/")

datamodule = TextClassificationData.from_csv(
    "review",
    "sentiment",
    train_file="data/imdb/train.csv",
    val_file="data/imdb/valid.csv",
)

model = AutoTextClassifier(
    datamodule,
    suggested_backbones=["sgugger/tiny-distilbert-classification"],
    max_epochs=10,
    optimization_metric="val_accuracy",
    timeout=300,
)
model.hp_tune()
```
### build_model(self, config)

Build a TextClassifier model from `ray.tune` hyperparameter configs or from a config dictionary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backbone` | `str` | Text classification backbone name, e.g. a Hugging Face model such as `sgugger/tiny-distilbert-classification`. | required |
| `optimizer` | `str` | PyTorch optimizer name. | required |
| `learning_rate` | `float` | Learning rate for the model. | required |
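As with the image task, `config` is a plain dictionary keyed by the parameters above. A hedged sketch with illustrative values (the backbone is the one used in the example; the optimizer and learning rate are placeholders, not defaults):

```python
# Illustrative build_model-style config for the text task.
# The backbone is taken from the example above; optimizer and
# learning rate are placeholder values, not library defaults.
config = {
    "backbone": "sgugger/tiny-distilbert-classification",
    "optimizer": "adam",
    "learning_rate": 2e-5,
}
assert {"backbone", "optimizer", "learning_rate"} <= config.keys()
```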
## AutoSummarization

Automatically finds a Text Summarization model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `datamodule` | `DataModule` | PL Lightning DataModule. | required |
| `max_epochs` | `int` | Maximum number of training epochs. | `10` |
| `n_trials` | `int` | Number of hyperparameter search trials. | `100` |
| `optimization_metric` | `Optional[str]` | Metric to optimize during the search. | `None` |
| `suggested_backbones` | `Union[List, str, None]` | Backbone(s) to restrict the search to. | `None` |
| `suggested_conf` | `Optional[dict]` | Sets Trial suggestions for the optimizer, learning rate, and all other hyperparameters. | `None` |
| `timeout` | `int` | Hyperparameter search will stop after `timeout`. | required |
Examples:

```python
from flash.core.data.utils import download_data
from flash.text import SummarizationData, SummarizationTask

from gradsflow import AutoSummarization

# 1. Download the data
download_data("https://pl-flash-data.s3.amazonaws.com/xsum.zip", "data/")

# 2. Load the data
datamodule = SummarizationData.from_csv(
    "input",
    "target",
    train_file="data/xsum/train.csv",
    val_file="data/xsum/valid.csv",
    test_file="data/xsum/test.csv",
)

model = AutoSummarization(
    datamodule,
    max_epochs=10,
    optimization_metric="val_accuracy",
    timeout=300,
)
model.hp_tune()
```
### build_model(self, config)

Build a SummarizationModel from `ray.tune` hyperparameter configs or from a config dictionary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backbone` | `str` | Summarization backbone name. | required |
| `optimizer` | `str` | PyTorch optimizer name. | required |
| `learning_rate` | `float` | Learning rate for the model. | required |
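All three `build_model` methods share the same config contract: a dictionary with `backbone`, `optimizer`, and `learning_rate` keys. A hypothetical helper (not part of gradsflow) that enumerates candidate configs of that shape:

```python
def make_configs(backbones, optimizers, learning_rates):
    """Hypothetical helper: cartesian product of values -> build_model-style dicts."""
    return [
        {"backbone": b, "optimizer": o, "learning_rate": lr}
        for b in backbones
        for o in optimizers
        for lr in learning_rates
    ]

# Placeholder values for illustration only.
configs = make_configs(["backbone-a"], ["adam", "sgd"], [1e-4, 1e-3])
# 1 backbone x 2 optimizers x 2 learning rates -> 4 candidate configs
```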