
Releases: ludwig-ai/ludwig

v0.5.5

02 Aug 19:29

What's Changed

  • Bump Ludwig From v0.5.4 -> v0.5.5 by @arnavgarg1 in #2340
    • Bug fix: Use safe rename which works across filesystems when writing checkpoints
    • Fixed default eval_batch_size when setting batch_size=auto
    • Update R2 score to handle single sample computation

Full Changelog: v0.5.4...v0.5.5

v0.5.4

12 Jul 21:31

Full Changelog: v0.5.3...v0.5.4

v0.5.3

25 Jun 19:50

Full Changelog: v0.5.2...v0.5.3

v0.5.2

08 Jun 21:06

Full Changelog: v0.5.1...v0.5.2

v0.5.1

23 May 21:44

Full Changelog: v0.5...v0.5.1

v0.5: Declarative Machine Learning, now on PyTorch

10 May 18:20

Ludwig v0.5 is a complete renovation of Ludwig from the ground up with a focus on parity, scalability, deployment, reliability, and documentation. Ludwig v0.5 migrates our entire backend from TensorFlow to PyTorch and introduces several new features and technical improvements, including:

  • Step-based training and evaluation to enable frequent sub-epoch monitoring of model health and evaluation metrics. This is particularly useful for large datasets that may be trained using large models.
  • Data balancing: upsampling and downsampling during preprocessing to produce better-proportioned datasets.
  • End-to-end torchscript to support low-level optimized model deployment, including preprocessing and post-processing, to go directly from example to predictions.
  • Ludwig on Ray with RayDatasets enabling significant training speed boosts for reading large datasets while training Ludwig models on a Ray cluster.
  • The addition of MLPMixer and ViTEncoder as image encoders for state-of-the-art deep learning on image data.
  • AutoML for tabular and text classification, integrated with distributed hyperparameter search using RayTune.
  • Scalability optimizations with Dask, Modin, and Ray, enabling Ludwig to preprocess, train, and evaluate over datasets hundreds of gigabytes in size in tens of minutes.
  • Config validation using marshmallow schemas revealing configuration typos or bad values early and increasing reliability.
  • More tests. We've quadrupled the number of unit tests and end-to-end integration tests and we've expanded our CI testing to run in distributed and GPU settings. This strengthens Ludwig's stability and helps build confidence in new changes going forward.

Our team is thoroughly invested in improving the declarative ML experience, and, as part of the v0.5 release, we've revamped the getting started guide, user guide, and developer documentation. We've also published a handful of end-to-end tutorials with thoroughly documented notebooks on text, tabular, image, and multimodal classification that provide a deep walkthrough of Ludwig's functionality.

Migrating to PyTorch

Ludwig's migration to PyTorch was a substantial six-month undertaking involving 230+ commits, changes to 70k+ lines of code, and contributions from 40+ people.

PyTorch's pythonic design and emphasis on developer experience are well aligned with Ludwig's principles of simplicity, modularity, and extensibility. Switching to PyTorch as Ludwig's backend was strongly motivated by the productivity gains in development, debugging, and iteration that the more pythonic PyTorch API affords, as well as the great ecosystem the PyTorch community has built around it. With Ludwig on PyTorch, we're thrilled to see what developers, researchers, and data scientists in the PyTorch and broader deep learning community can bring to Ludwig.

Feature and Performance Parity

Over the last several months, we've moved all Ludwig encoders, combiners, decoders, and metrics for every data modality that Ludwig supports, as well as all of the backend infrastructure on Horovod and Ray, to PyTorch.

At the same time, we wanted to make sure that the experience of Ludwig users continues to be performant and delightful. We've run extensive comparisons between Ludwig v0.5 (PyTorch-based) and Ludwig v0.4 on text, image, and tabular datasets, evaluating training speed, inference throughput, and model performance, to verify that there's been no degradation.

Our results show roughly the same high GPU utilization (~90%) on several datasets, with significant improvements in distributed training speed and memory usage, without impacting model accuracy or time to convergence. We'll be publishing a blog post with more details on benchmarking soon.

New Features

In addition to the PyTorch migration, Ludwig v0.5 is packed with new functionality, features, and additional changes that make v0.5 the most feature-rich and robust release of Ludwig yet.

Step-based training and evaluation

Ludwig's train loop is epoch-based by default, with one round of evaluation per epoch (one pass through the dataset).

for epoch in range(num_epochs):
    for batch in training_data.batches:
        train(batch)
    save_model(model_dir)
    evaluation(training_data)
    evaluation(validation_data)
    evaluation(test_data)
    print_results()

This is a good fit for tabular datasets, which tend to be small, fit in memory, and train quickly. However, it can be awkward for unstructured datasets, which tend to be much larger and train more slowly due to larger models. Now, with step-based training and evaluation, users can configure a more frequent sub-epoch evaluation cadence to monitor metrics and model health more regularly.

Use steps_per_checkpoint to run evaluation every N training steps, or checkpoints_per_epoch to run evaluation N times per epoch.

trainer:
    steps_per_checkpoint: 1000

trainer:
    checkpoints_per_epoch: 2

Note that it is invalid to specify both checkpoints_per_epoch and steps_per_checkpoint simultaneously.
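
Conceptually, evaluation then moves from the end of each epoch into the batch loop. A rough sketch mirroring the pseudocode above (not Ludwig's actual trainer code), assuming steps_per_checkpoint is set:

global_step = 0
for epoch in range(num_epochs):
    for batch in training_data.batches:
        train(batch)
        global_step += 1
        if global_step % steps_per_checkpoint == 0:
            save_model(model_dir)
            evaluation(validation_data)
            evaluation(test_data)
            print_results()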

To further speed up evaluation, users can skip evaluation on the training set by setting evaluate_training_set to False.

trainer:
    evaluate_training_set: false

Data balancing

Users working with imbalanced datasets can specify an oversampling or undersampling parameter which will balance the data during preprocessing.

In this example, Ludwig will oversample the minority class to achieve a 50% representation in the overall dataset.

preprocessing:
    oversample_minority: 0.5

In this example, Ludwig will undersample the majority class to achieve a 70% representation in the overall dataset.

preprocessing:
    undersample_majority: 0.7

Data balancing is only supported for binary output features, and specifying both parameters at the same time is not supported.
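
For intuition, oversampling to a target fraction amounts to duplicating minority-class rows until they make up the requested share of the dataset. A minimal pandas sketch of the idea (an illustration only, not Ludwig's implementation):

import pandas as pd

def oversample_minority(df: pd.DataFrame, label: str, target: float) -> pd.DataFrame:
    # Split a binary label column into minority and majority classes.
    counts = df[label].value_counts()
    minority = df[df[label] == counts.idxmin()]
    majority = df[df[label] == counts.idxmax()]
    # Solve n / (n + len(majority)) = target for the minority row count n.
    n = int(target * len(majority) / (1 - target))
    upsampled = minority.sample(n=n, replace=True, random_state=0)
    # Recombine and shuffle.
    return pd.concat([majority, upsampled]).sample(frac=1, random_state=0)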

When developing models, it can be useful to iterate quickly with a smaller portion of the dataset. Ludwig supports this with a new preprocessing parameter, sample_ratio, which subsamples the dataset.

preprocessing:
    sample_ratio: 0.7

End-to-end torchscript

Users can export trained Ludwig models to torchscript with ludwig export_torchscript.

ludwig export_torchscript --model=/path/to/model

Models that use number, category, and binary features now support torchscript-compatible preprocessing, enabling end-to-end torchscript compilation.

inputs = {
    'cat_feature': ['foo', 'bar'],
    'num_feature': torch.tensor([42, 7]),
    'bin_feature1': torch.tensor([True, False]),
    'bin_feature2': ['No', 'Yes'],
}

scripted_model = model.to_torchscript()
output = scripted_model(inputs)
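
Once scripted, the model can be saved and reloaded with the standard torch.jit APIs, so inference no longer requires a Ludwig installation. A minimal sketch, assuming to_torchscript() returns a torch.jit.ScriptModule:

import torch

# Persist the scripted model (preprocessing + model + postprocessing).
torch.jit.save(scripted_model, "ludwig_model.pt")

# Later, in a serving process:
restored = torch.jit.load("ludwig_model.pt")
output = restored(inputs)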

End-to-end torchscript compilation is also supported for text features that use torchscript-enabled torchtext tokenizers. We are actively working on adding support for other data types.

AutoML for Text Classification

In v0.4, we introduced experimental AutoML functionality into Ludwig.

Ludwig AutoML automatically creates deep learning models given a dataset, its label column, and a time budget. Ludwig AutoML infers the input and output feature types, chooses the model architecture, and specifies the parameters and ranges across which to perform hyperparameter search.

import ludwig.automl

auto_train_results = ludwig.automl.auto_train(
    dataset=my_dataset_df,
    target=target_column_name,
    time_limit_s=7200,
    tune_for_memory=False,
)
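
The returned results object can then be used to inspect the search and retrieve the winning model. The attribute names below are assumptions based on the AutoML documentation rather than guarantees; check the current API before relying on them.

# Assumed attributes on the AutoML results object (verify against the docs).
print(auto_train_results.best_trial_id)
best_model = auto_train_results.best_model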

Our initial AutoML work focused on tabular datasets, since good performance on such datasets is a current area of interest in the DL community. In v0.5, we expand on this work to develop and validate Ludwig AutoML for text classification.

Config validation against Marshmallow Schemas

The combiner and trainer sections of Ludwig configurations are now validated against official Marshmallow schemas. This centralizes documentation, flags configuration typos or bad values, and helps catch regressions.
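
For readers unfamiliar with marshmallow, here is a minimal, self-contained sketch of the validation pattern itself (an illustration of the library, not Ludwig's actual schema):

from marshmallow import Schema, ValidationError, fields, validate

class TrainerSchema(Schema):
    # Illustrative subset of trainer options; not Ludwig's real schema.
    learning_rate = fields.Float(validate=validate.Range(min=0.0), load_default=0.001)
    epochs = fields.Integer(validate=validate.Range(min=1), load_default=100)

try:
    TrainerSchema().load({"learning_rate": -1.0})
except ValidationError as err:
    print(err.messages)  # field-level error naming learning_rate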

Better Test Coverage

We've quadrupled the number of unit and integration tests and established new testing guidelines for well-tested contributions going forward. This strengthens Ludwig's stability and iterability, and helps build confidence in new changes.

Backward Compatibility

Despite all of the code changes...


v0.5rc2

07 Mar 20:56
Pre-release

Fixes loss reporting consistency issues and shape-based metric calculation errors with SET output features.

v0.5rc1

10 Feb 20:13
Pre-release

Migration to PyTorch.

v0.4.1: Ray training, Ray datasets, experimental AutoML with auto config generation integrated with hyperopt on RayTune, image improvements, Python 3.9/TF 2.7

01 Feb 07:28

Summary

This release features experimental AutoML with auto config generation and auto-training integrated with hyperopt on RayTune, along with integrations with Ray training and Ray datasets. We're still working on a comprehensive overhaul of the documentation, and all of the new functionality will also be available in the upcoming v0.5.

Aside from critical bugs and new datasets, v0.4.1 will be the last release of Ludwig using TensorFlow. Starting with v0.5 (release coming soon), Ludwig will use PyTorch as the backend for tensor computation. We will release a blog post detailing the rationale and impact of this decision, but we wanted to do one last TensorFlow release so that everyone committed to the TensorFlow ecosystem who has used Ludwig so far can enjoy the many bug fixes and improvements we made to the codebase that were not specific to PyTorch.

The next version, v0.5, will also include several additional improvements that we're excited to share in the coming weeks.

Improvements

  • Allow logging params to mlflow from any epoch by @tgaddair in #1211
  • Changed remote fs behavior to upload at the end of each epoch by @tgaddair in #1210
  • Add metric and loss modules for RMSE, RMSPE, and AUC by @ANarayan in #1214
  • [hyperopt] fixed metric_score to use test split when available by @tgaddair in #1239
  • Fixed metric selection to ignore config split if unavailable by @tgaddair in #1248
  • Ray Tune Intermediate Checkpoint Cleaning by @ANarayan in #1255
  • Do not initialize Ray if already initialized by @Yard1 in #1277
  • Changed default combiner to concat from tabnet by @ShreyaR in #1278
  • Ray data migration by @ShreyaR in #1260
  • Fix automl to treat binary as categorical when missing values present by @tgaddair in #1292
  • Add serialization for DatasetInfo and round avg_words to int by @hungcs in #1294
  • Cast max_length to int in build_sequence_matrix::pad by @Yard1 in #1295
  • [automl] update model config parameter ranges by @ANarayan in #1298
  • Change INFER_IMAGE_DIMENSIONS default to True by @hungcs in #1303
  • Add HTTPS retries for image urls by @hungcs in #1304
  • Return None for unreadable images and try to infer num channels by @hungcs in #1307
  • Add gray image/avg image fallbacks for unreachable images by @hungcs in #1312
  • Account for image extensions during image type inference by @hungcs in #1335
  • Fixed schema validation to handle null preprocessing values for strings by @tgaddair in #1344
  • Added default size and output_size for tabnet by @tgaddair in #1355
  • Removed DaskBackend and moved tests to RayBackend by @tgaddair in #1412
  • Perform preprocessing first before hyperopt when possible by @tgaddair in #1415
  • Employ a fallback str2bool mapping from the feature column's distinct values when the feature's values aren't boolean-like. by @justinxzhao in #1471
  • Remove trailing dot in income label field in adult_census… by @amholler in #1475
  • Update Ludwig AutoML Feature Type Selection by @amholler in #1485
  • Update infer_type tests to reflect interface and functionality updates by @amholler in #1493
  • Skip converting to TensorDType if the column is binary by @tgaddair in #1547
  • Remove TensorDType conversion for all scalar types by @tgaddair in #1560
  • Update AutoML tabular model type choice to remove heuristic for concat by @amholler in #1548
  • Better handle empty fields with distinct_values=[] by @hungcs in #1574
  • Port #1476 ('dict' option for weights_initializer and bias_initializer) to tf_legacy by @ksbrar in #1599
  • Modify combiners to accept input_features as a dict instead of a list by @jeffreyftang in #1618
  • Update hyperopt: Choose best model from validation data; For stopped Ray Tune trials, run evaluate at search end by @amholler in #1612
  • Keep search_alg type in dict to record in hyperopt_statistics.json by @amholler in #1626
  • For ames_housing, remove test.csv from processing; it has no label column which prevents test split eval by @amholler in #1634
  • Improve Ludwig resilience to Ray Tune issues by @amholler in #1660
  • Handle download gzip files by @amholler in #1676
  • Upgrade tf from 2.5.2 to 2.7.0. by @justinxzhao in #1713
  • Add basic precommit to tf-legacy to pass precommit checks on tf-legacy PRs. by @justinxzhao in #1718
  • For kdd datasets, do not include unlabeled test data by default by @amholler in #1704
  • Use config which has been previously validated by @vreyespue in #1213
  • Update Readme to activate directly the virtualenv by @vreyespue in #1212
  • doc: Correct README.md link to Developer Guide by @jimthompson5802 in #1217
  • Update pandas version by @w4nderlust in #1223
  • Modify Kaggle datasets to not process test sets by @ANarayan in #1233
  • Restructure dataframe preprocessing setup and change to avoid creatin… by @amholler in #1240

Bug fixes


v0.4: Distributed processing and training with Ray and Dask, Distributed hyperopt with RayTune, TabNet, Remote FS, MLflow for monitoring and serving, new Datasets

15 Jun 04:22

Changelog

Additions

  • Integrate ray tune into hyperopt (#1001)
  • Added Ames Housing Kaggle dataset (#1098)
  • Added functionality to obtain subtrees in the SST dataset (#1108)
  • Added comparator combiner (#1113)
  • Additional Text Classification Datasets (#1121)
  • Added Ray remote backend and Dask distributed preprocessing (#1090)
  • Added TabNet combiner and needed modules (#1062)
  • Added Higgs Boson dataset (#1157)
  • Added GitHub workflow to push to Docker Hub (#1160)
  • Added more tagging schemes for Docker images (#1161)
  • Added Docker build matrix (#1162)
  • Added category feature > 1 dim to TabNet (#1150)
  • Added timeseries datasets (#1149)
  • Add TabNet Datasets (#1153)
  • Forest Cover Type, Adult Census Income and Rossmann Store Sales datasets (#1165)
  • Added KDD Cup 2009 datasets (#1167)
  • Added Ray GPU image (#1170)
  • Added support for cloud object storage (S3, GCS, ADLS, etc.) (#1164)
  • Perform inference with Dask when using the Ray backend (#1128)
  • Added schema validation to config files (#1186)
  • Added MLflow experiment tracking support (#1191)
  • Added export to MLflow pyfunc model format (#1192)
  • Added MLP-Mixer image encoder (#1178)
  • Added TransformerCombiner (#1177)
  • Added TFRecord support as a preprocessing cache format (#1194)
  • Added higgs boson tabnet examples (#1209)

Improvements

  • Abstracted Horovod params into the Backend API (#1080)
  • Added allowed_origins to serving to allow cross-origin requests (#1091)
  • Added callbacks to hook into the training loop programmatically (#1094)
  • Added scheduler support to Ray Tune hyperopt and fixed GPU usage (#1088)
  • Ray Tune: enforced that epochs equals max_t and early stopping is disabled (#1109)
  • Added register_trainable logic to RayTuneExecutor (#1117)
  • Replaced Travis CI with GitHub Actions (#1120)
  • Split distributed tests into separate test suite (#1126)
  • Removed unused regularizer parameter from training defaults
  • Restrict the Docker build GitHub Action to ludwig-ai repos only (#1166)
  • Harmonize return object for categorical, sequence generator and sequence tagger (#1171)
  • Sourcing images from either file path or in-memory ndarrays (#1174)
  • Refactored hyperopt results into object structure for easier programmatic usage (#1184)
  • Refactored all contrib classes to use the Callback interface (#1187)
  • Improved performance of Dask preprocessing by adding parallelism (#1193)
  • Improved TabNetCombiner and Concat combiner (#1177)
  • Added additional backend configuration options (#1195)
  • Made should_shuffle configurable in Trainer (#1198)

Bugfixes

  • Fix SST parentheses issue
  • Fix serve.py adding a try around the form parsing (#1111)
  • Fix #1104: add lengths to text encoder output with updated unit test (#1105)
  • Fix sst2 substree logic to match glue sst2 dataset (#1112)
  • Fix #1078: Avoid recreating cache when using image preproc (#1114)
  • Fix check for whether dask exists in figure_data_format_dataset
  • Fixed bug in EthosBinary dataset class and model directory copying logic in RayTuneReportCallback (#1129)
  • Fix #1070: error when saving model with image feature (#1119)
  • Fixed IterableBatcher incompatibility with ParquetDataset and remote model serialization (#1138)
  • Fix: passing backend and TF config parameters to model load path in experiment
  • Fix: improved TabNet numerical stability + refactoring
  • Fix #1147: passing bn_epsilon to AttentiveTransformer initialization in TabNet
  • Fix #1093: loss value mismatch (#1103)
  • Fixed CacheManager to correctly handle test_set and validation_set (#1189)
  • Fixing TabNet sparsity loss issue (#1199)

Breaking changes

Most models trained with v0.3.3 should keep working in v0.4.
The main changes in v0.4 are additional options, so what worked previously should not break now.
The one exception is that there is now a much stricter check of the validity of the model configuration.
This is great, as it allows catching errors earlier, although configurations that worked in the past despite errors may no longer work.
The checks should help identify the issues in the configurations, so errors should be easy to fix.

Contributors

@tgaddair @jimthompson5802 @ANarayan @kaushikb11 @mejackreed @ronaldyang @zhisbug @nimz @kanishk16