Learning to rank (LTR from now on) is the application of machine learning techniques, typically supervised, to building ranking models for information retrieval systems.
With LTR becoming more and more popular (Apache Solr has supported it since January 2017, and Elasticsearch has an Open Source plugin released in 2018), organizations struggle to evaluate the quality of the models they train.
This talk explores the major aspects of both Offline and Online evaluation.
Setting up the right infrastructure and processes for a fair and effective evaluation of the trained models is vital for measuring the improvements and regressions of an LTR system.
The talk is intended for:
- Product Owners, Search Managers, Business Owners
- Software Engineers, Data Scientists, and Machine Learning Enthusiasts
Expect to learn:
- the importance of Offline testing from a business perspective
- how Offline testing can be done with Open Source libraries
- how to build a realistic test set from the original input data set, avoiding common mistakes in the process (see the first sketch after this list)
- the importance of Online testing from a business perspective
- A/B testing and Interleaving approaches: details, pros, and cons (see the second sketch after this list)
- common mistakes and how they can distort the results you obtain
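
To make the Offline points concrete: a common mistake when building a test set is to split judgements at random, which leaks documents of the same query into both the training and the test data. Below is a minimal sketch, assuming a numpy/scikit-learn setup with illustrative array names (not code from the talk), of a query-level split followed by an NDCG@10 evaluation:

```python
import numpy as np
from sklearn.metrics import ndcg_score
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(42)
X = rng.random((1000, 10))                  # feature vectors (illustrative)
y = rng.integers(0, 4, size=1000)           # graded relevance labels, 0-3
query_ids = rng.integers(0, 50, size=1000)  # the query each row belongs to

# Splitting rows at random would leak documents of the same query into
# both sides; grouping by query keeps each query entirely in one split.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=query_ids))
assert set(query_ids[train_idx]).isdisjoint(query_ids[test_idx])

def mean_ndcg_at_k(scores, labels, qids, k=10):
    """Average NDCG@k over queries (not over individual rows)."""
    per_query = []
    for q in np.unique(qids):
        mask = qids == q
        if mask.sum() < 2:                  # ndcg_score needs >1 document
            continue
        per_query.append(ndcg_score([labels[mask]], [scores[mask]], k=k))
    return float(np.mean(per_query))

# Usage with any trained ranker, e.g.:
#   mean_ndcg_at_k(model.predict(X[test_idx]), y[test_idx], query_ids[test_idx])
```

Averaging NDCG per query, rather than over all rows, prevents queries with many judged documents from dominating the metric.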
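
On the Online side: where A/B testing splits users between the two rankers, interleaving shows each user a single result list merged from both rankings and credits clicks to the ranker that contributed the clicked document, which usually detects differences with far less traffic. A minimal sketch of the team-draft variant, with hypothetical document ids:

```python
import random

def team_draft_interleave(ranking_a, ranking_b, seed=None):
    """Merge two rankings; remember which 'team' contributed each doc."""
    rng = random.Random(seed)
    all_docs = set(ranking_a) | set(ranking_b)
    seen, interleaved = set(), []        # interleaved holds (doc, team) pairs
    picks = {"A": 0, "B": 0}
    while len(seen) < len(all_docs):
        # Team-draft rule: the team with fewer picks so far goes next;
        # a coin flip breaks ties to avoid systematic position bias.
        if picks["A"] != picks["B"]:
            team = "A" if picks["A"] < picks["B"] else "B"
        else:
            team = "A" if rng.random() < 0.5 else "B"
        source = ranking_a if team == "A" else ranking_b
        doc = next((d for d in source if d not in seen), None)
        if doc is None:                  # this ranking is exhausted
            team = "B" if team == "A" else "A"
            source = ranking_a if team == "A" else ranking_b
            doc = next(d for d in source if d not in seen)
        seen.add(doc)
        interleaved.append((doc, team))
        picks[team] += 1
    return interleaved

def credit_clicks(interleaved, clicked_docs):
    """Count clicks per team for a single impression."""
    wins = {"A": 0, "B": 0}
    for doc, team in interleaved:
        if doc in clicked_docs:
            wins[team] += 1
    return wins

mixed = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d4", "d1"], seed=7)
print(credit_clicks(mixed, {"d3", "d4"}))  # clicks favour ranker B here
```

Aggregating these per-impression credits over many users yields the interleaving verdict between the two rankers.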
Join us as we explore real-world scenarios and dos and don'ts from the e-commerce industry!