
TABLE III
RESULTS

                MovieLens 20M                      Netflix Prize Dataset
              NDCG@100  Recall@20  Recall@50    NDCG@100  Recall@20  Recall@50
Mult-DAE        0.419     0.387      0.524        0.380     0.344      0.438
Mult-VAE        0.426     0.395      0.537        0.386     0.351      0.444
EASE            0.420     0.391      0.521        0.393     0.362      0.445
RecVAE          0.442     0.414      0.553        0.394     0.361      0.452
H+Vamp Gated    0.445     0.413      0.551        0.409     0.376      0.463
Neural EASE     0.431     0.403      0.532        0.395     0.363      0.447
FLVAE           0.445     0.409      0.547        0.398     0.363      0.450
VASP            0.448     0.414      0.552        0.406     0.372      0.457
We proposed a data augmentation method that prevents overfitting to the identity and demonstrated experimentally that it improves the performance of autoencoders used for top-n recommendation.
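To make the augmentation concrete, the sketch below shows one way to corrupt the autoencoder's input while keeping the original interaction vector as the reconstruction target, so the network cannot succeed by simply copying its input. It assumes binary interaction vectors stored as NumPy arrays; the function name and the drop rate are illustrative assumptions, not the exact settings from our experiments.

```python
import numpy as np

# Illustrative sketch (names and drop rate are assumptions, not the exact
# procedure from the paper): hide a random fraction of a user's observed
# interactions and reconstruct the uncorrupted vector.
def augment_interactions(x, drop_rate=0.3, rng=None):
    """Return a corrupted copy of the binary interaction vector x."""
    if rng is None:
        rng = np.random.default_rng()
    x_corrupted = x.copy()
    observed = np.flatnonzero(x)            # indices of interacted items
    n_drop = int(len(observed) * drop_rate)
    drop_idx = rng.choice(observed, size=n_drop, replace=False)
    x_corrupted[drop_idx] = 0               # mask the selected interactions
    return x_corrupted

# Training then uses (augment_interactions(x), x) pairs instead of (x, x),
# so reproducing the identity mapping no longer minimizes the loss.
```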
We also proposed a novel joint-learning technique for training multiple models together. Using it, we constructed VASP, a Variational Autoencoder with a parallel Shallow Path, and demonstrated experimentally that a variational autoencoder combined with a simple parallel shallow linear model can match current sophisticated state-of-the-art models and even outperform them in some cases.
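The sketch below illustrates the joint-learning idea in Keras: a deep path and a shallow linear path read the same input, and their per-item sigmoid scores are combined element-wise (a Hadamard product) so that a single loss trains both models together. The layer sizes, the plain dense bottleneck standing in for the variational encoder, and the binary cross-entropy objective are illustrative assumptions, not the full VASP architecture.

```python
import tensorflow as tf

n_items = 20000  # illustrative catalogue size (assumption)

user_vector = tf.keras.Input(shape=(n_items,))

# Deep path: stand-in for the variational autoencoder (the real model
# samples a latent code; a plain dense bottleneck keeps the sketch short).
h = tf.keras.layers.Dense(512, activation="relu")(user_vector)
deep_logits = tf.keras.layers.Dense(n_items)(h)

# Shallow path: a single linear layer over the same input, in the spirit
# of EASE-like item-item models.
shallow_logits = tf.keras.layers.Dense(n_items)(user_vector)

# Joint output: element-wise (Hadamard) product of the two paths' sigmoid
# scores, so both paths receive gradients from one loss.
deep_scores = tf.keras.layers.Activation("sigmoid")(deep_logits)
shallow_scores = tf.keras.layers.Activation("sigmoid")(shallow_logits)
scores = tf.keras.layers.Multiply()([deep_scores, shallow_scores])

model = tf.keras.Model(user_vector, scores)
model.compile(optimizer="adam", loss="binary_crossentropy")
```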
ACKNOWLEDGMENT
Our research has been supported by the Grant Agency of the Czech Technical University in Prague (SGS20/213/OHK3/3T/18), the Czech Science Foundation (GAČR 18-18080S), Recombee and VUSTE-APIS.