mlr3torch 0.2.0
Breaking Changes
- Removed some optimizers for which no fast (‘ignite’) variant
exists.
- The default optimizer is now AdamW instead of Adam.
- The private LearnerTorch$.dataloader() method no longer operates on the task, but on the dataset generated by the private LearnerTorch$.dataset() method.
- The shuffle parameter during model training is now initialized to TRUE to sidestep issues where the data is sorted.
- Optimizers now use the faster (‘ignite’) implementations, which leads to considerable speed improvements.
- The jit_trace parameter was added to LearnerTorch, which, when set to TRUE, can lead to significant speedups. This should only be enabled for ‘static’ models; see the torch tutorial for more information.
- Added the parameter num_interop_threads to LearnerTorch.
- The tensor_dataset parameter was added, which allows stacking all batches at the beginning of training to make subsequent batch loading faster.
- Use a faster default image loader.
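Several of the changes above surface as parameters on LearnerTorch. A minimal sketch of how they might be set together, assuming the mlr3 lrn()/tsk() sugar and the classif.mlp learner (parameter names are taken from this changelog; the learner id and values are illustrative assumptions):

```r
library(mlr3torch)

# Illustrative sketch only: parameter names follow the changelog above.
learner = lrn("classif.mlp",
  epochs              = 10,
  batch_size          = 32,
  shuffle             = TRUE,   # now initialized to TRUE by default
  jit_trace           = TRUE,   # only enable for 'static' models
  num_interop_threads = 2,
  tensor_dataset      = TRUE    # stack all batches once up front
)
learner$train(tsk("sonar"))
```

Note that jit_trace trades flexibility for speed: tracing records one fixed execution path, which is why the changelog restricts it to ‘static’ models.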
Features
- Added a PipeOp for adaptive average pooling.
- The n_layers parameter was added to the MLP learner.
- Added multimodal melanoma and cifar{10, 100} example tasks.
- Added a callback to iteratively unfreeze parameters for
finetuning.
- Added different learning rate schedulers as callbacks.
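As an illustration of the new n_layers parameter, an MLP of fixed depth might be configured as below (a sketch; the learner id classif.mlp and the neurons value are assumptions, only n_layers comes from this changelog):

```r
library(mlr3torch)

# Sketch: an MLP whose depth is controlled via the new n_layers parameter.
learner = lrn("classif.mlp",
  n_layers   = 3,   # new in this release
  neurons    = 64,
  epochs     = 5,
  batch_size = 32
)
```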
Bug Fixes
- Torch learners can now be used with AutoTuner.
- Early stopping now uses epochs - patience for the internally tuned values instead of the total number of trained epochs as before.
- The dataset of a learner no longer needs to return the tensors on the specified device, which allows for parallel dataloading on GPUs.
- PipeOpBlock should no longer create ID clashes with other PipeOps in the graph (#260).
mlr3torch 0.1.2
- Don’t use the deprecated data_formats anymore.
- Added CallbackSetTB, which allows logging that can be viewed in TensorBoard.
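A sketch of how the TensorBoard callback might be attached, assuming it is constructed with t_clbk() under the id "tb" and accepts a path argument (both are assumptions; only the class name CallbackSetTB comes from this changelog):

```r
library(mlr3torch)

# Sketch: attach the TensorBoard logging callback to a torch learner.
learner = lrn("classif.mlp",
  epochs     = 10,
  batch_size = 32,
  callbacks  = t_clbk("tb", path = "logs/run1")  # id and argument assumed
)
# After training, the logs could be inspected with: tensorboard --logdir logs
```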
mlr3torch 0.1.1
- fix(preprocessing): the construction of some PipeOps such as po("trafo_resize") failed in some cases.
- fix(ci): tests were not run in the CI
- fix(learner): LearnerTabResnet now works correctly.
- feat: added the nn() helper function to simplify the creation of neural network layers.
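A sketch of what the nn() helper enables, assuming nn("linear") and nn("relu") map to the corresponding nn_* PipeOps and that layers compose with %>>% from mlr3pipelines (these ids are assumptions; only nn() itself comes from this changelog):

```r
library(mlr3torch)

# Sketch: compose network layers with the nn() shorthand (layer ids assumed).
block = nn("linear", out_features = 64) %>>%
  nn("relu") %>>%
  nn("linear", out_features = 10)
```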
mlr3torch 0.1.0