The matching ideas behind optimal transport (OT) play an increasingly important role in machine learning, a trend that can be observed when OT is used to disambiguate datasets in applications (e.g., single-cell genomics) or to improve more complex methods (e.g., balanced attention in transformers or self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on millions, not thousands, of points. The low-rank optimal transport (LOT) approach advocated in (Scetbon et al., 2021) holds several promises in that regard, and was shown to improve on more established entropic regularization approaches, being able to insert itself into more complex pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to those that have a low nonnegative rank, yielding linear-time algorithms in cases of interest. However, these promises can only be fulfilled if the LOT approach is seen as a legitimate contender to entropic regularization when compared on properties of interest, where the scorecard typically includes theoretical properties (statistical complexity and relation to other methods) or practical aspects (debiasing, hyperparameter tuning, initialization). We target each of these areas in this paper in order to cement the impact of low-rank approaches in computational OT.
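To give intuition for why a low nonnegative rank yields linear-time operations, here is a minimal NumPy sketch (not the authors' implementation): a coupling of nonnegative rank r is stored in factored form P = Q diag(1/g) Rᵀ, so that matrix-vector products and marginals cost O((n + m)r) rather than O(nm). The factor names Q, R, g follow the usual LOT notation; the random factors below are hypothetical placeholders, normalized only so that the factorization has consistent marginals.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 1000, 800, 5  # number of source points, target points, rank

# Hypothetical factors; in LOT they would be optimized, here they are
# random but normalized so the factorization is a valid coupling-like object.
Q = rng.random((n, r))
Q /= Q.sum()            # total mass 1
g = Q.sum(axis=0)       # shared inner marginal of the factorization
R = rng.random((m, r))
R /= R.sum(axis=0)
R *= g                  # make R's column sums equal g as well

# Applying P = Q diag(1/g) R^T to a vector costs O((n + m) r), not O(n m):
v = rng.random(m)
Pv_factored = Q @ ((R.T @ v) / g)

# Check against the dense product (only feasible at this small scale).
P_dense = Q @ np.diag(1.0 / g) @ R.T
assert np.allclose(Pv_factored, P_dense @ v)

# The row marginal of P is recovered from Q alone: P @ 1_m = Q @ 1_r.
assert np.allclose(P_dense.sum(axis=1), Q.sum(axis=1))
```

When the cost matrix itself admits a low-rank factorization (as with squared Euclidean costs), the same trick makes evaluating the transport cost ⟨C, P⟩ linear in n + m as well, which is the regime in which LOT scales to millions of points.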