21 September, 2018

One Model to Rule Them All.


In this article, we presented a new model that operates seamlessly across the full range of unsupervised, semi-supervised and supervised learning settings. The unsupervised objective (for instance, anomaly detection) is improved by any available label, while the supervised objective (for instance, classification) is in turn improved by every data point, whether labelled or not. Even when all data points are labelled, we have shown that classification accuracy improves over an equivalent purely supervised model, as the decoder effectively acts as a regularizer.
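The article does not spell out the training objective, but the behaviour described above can be sketched as a single per-example loss: every data point contributes the variational (ELBO) term, and labelled points additionally contribute a classification term. The function names, the squared-error reconstruction, the standard-Gaussian prior, and the weighting factor `alpha` below are all assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def elbo_loss(x, x_recon, mu, log_var):
    """Unsupervised term: reconstruction error plus the KL divergence
    of the approximate posterior N(mu, sigma^2) from the prior N(0, 1)."""
    recon = np.sum((x - x_recon) ** 2)
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl

def classification_loss(logits, label):
    """Supervised term: cross-entropy of the classifier head."""
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[label]

def total_loss(x, x_recon, mu, log_var, logits=None, label=None, alpha=1.0):
    """Every data point contributes the ELBO; labelled points
    additionally contribute a weighted classification term."""
    loss = elbo_loss(x, x_recon, mu, log_var)
    if label is not None:
        loss += alpha * classification_loss(logits, label)
    return loss
```

With a loss of this shape, unlabelled points still shape the latent representation (which the anomaly detector and the classifier both sit on top of), while each label sharpens the classifier head.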

The business value of such a model is huge:

  • Due to its semi-supervised nature, the model needs fewer labels to perform classification tasks. This adds a lot of value, as labels are usually scarce and collecting them costs both time and money. The presented model hence greatly facilitates on-boarding new customers to predictive systems.
  • You only have to maintain one model, even if your company serves both unsupervised and supervised tasks.
  • New customers can be seamlessly on-boarded to the full range of your products. Consider a customer that does not have any labels yet. You can still use the presented variational autoencoder in this situation: it will start building up useful representations of the customer's data and producing results for anomaly detection. Once the first labels come in, you can feed them into the same model, without having to re-configure or re-train yet another model for the classification task. Better still, the model automatically leverages the unlabelled data points it was previously exposed to, so the classification task benefits from them from day one.
  • The model is expected to produce better results than specialized models on both unsupervised and supervised tasks.
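The on-boarding workflow described above, one model that accepts data with or without labels and serves both anomaly detection and classification, can be illustrated with a deliberately simple toy (a running mean as the "representation" and per-class centroids as the "classifier"; the class name, method names, and scoring rules are all illustrative stand-ins, not the actual variational autoencoder):

```python
import numpy as np

class OneModel:
    """Toy stand-in for the single-model workflow: unlabelled points
    refine a shared representation (here, a running mean used for a
    distance-based anomaly score); labelled points additionally fit
    per-class centroids used for classification."""

    def __init__(self, dim):
        self.mean = np.zeros(dim)   # shared representation
        self.n = 0
        self.class_sums = {}        # classifier state, built lazily
        self.class_counts = {}

    def train_step(self, x, label=None):
        # Every point, labelled or not, updates the representation.
        self.n += 1
        self.mean += (x - self.mean) / self.n
        # A label additionally updates the classifier state.
        if label is not None:
            self.class_sums[label] = self.class_sums.get(label, 0.0) + x
            self.class_counts[label] = self.class_counts.get(label, 0) + 1

    def anomaly_score(self, x):
        return float(np.linalg.norm(x - self.mean))

    def classify(self, x):
        centroids = {c: s / self.class_counts[c]
                     for c, s in self.class_sums.items()}
        return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Day one: the customer has no labels yet -- anomaly detection only.
model = OneModel(dim=2)
for x in np.random.rand(100, 2):
    model.train_step(x)

# Later: the first labels arrive -- same model, no re-configuration.
model.train_step(np.array([0.0, 0.0]), label=0)
model.train_step(np.array([10.0, 10.0]), label=1)
```

The point of the sketch is the interface, not the mechanics: a single `train_step` with an optional label means no second model, no migration, and no retraining from scratch when labels start arriving.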