In this Learning to Rank Training you will solve a ranking problem by integrating a machine learning system with your search engine. You will learn how to build a training set, train your model, and test it both online and offline.
The Learning to Rank Training covers either Apache Solr Integration or Elasticsearch Integration.
Skills you will gain
- How to integrate Machine Learning with your Search Engine to tune your relevance function
- How to gather user feedback and prepare your training set
- The ranking model life-cycle (training and deployment)
- How to test your ranking models offline and online
Prerequisites
- Basic understanding of Search Engines and Machine Learning
Intended Audience
Software Engineers, Data Scientists, and Machine Learning enthusiasts
Learning to Rank Training – Price
1500 GBP
Early bird price: 1050 GBP if you book before 31/01
Trainer
Alessandro Benedetti is the founder of Sease Ltd.
Alessandro has been involved from the beginning in the development of the Apache Solr Learning To Rank plugin, carried out by Bloomberg.
Over the years he has worked on various Learning To Rank projects and solutions, including contributing the Interleaving capability to Apache Solr.
Schedule
8 February 2021 – 03:00–07:00 PM (GMT)
9 February 2021 – 03:00–07:00 PM (GMT)
10 February 2021 – 03:00–07:00 PM (GMT)
Learning to Rank Training Programme
Introduction to LTR
- Offline Learning to Rank Techniques
  - Core Concepts
  - Algorithms
  - State of the art
- Online Learning to Rank
  - Core Concepts
  - Algorithms
  - State of the art
How to Build your Training Set
- Implicit Feedback
- Explicit Feedback
- Feature Engineering
  - Feature level
  - Feature type
  - Categorical Features
  - Missing values
- Relevance Label Estimation
  - Click Modelling
- Train/Test/Validation Split
- Hands On Exercises (see the sketch below)
  - Categorical Encoding
  - Missing Values Count
  - From interactions to training set
  - Let’s split the training set
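To give a flavour of these exercises, here is a minimal sketch of going from raw interactions to a split training set with pandas and scikit-learn. The interaction log and its column names are illustrative assumptions, not the course dataset:

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Illustrative interaction log: one row per (query, document) impression.
interactions = pd.DataFrame({
    "query_id": [1, 1, 1, 2, 2],
    "doc_id":   [10, 11, 12, 10, 13],
    "category": ["book", "dvd", None, "book", "cd"],  # categorical feature
    "price":    [9.9, 15.0, 7.5, 9.9, 12.0],          # numerical feature
    "clicked":  [1, 0, 0, 0, 1],                      # implicit feedback
})

# Count missing values per column before deciding how to handle them.
print(interactions.isna().sum())

# One-hot encode the categorical feature (missing values get all zeros).
encoded = pd.get_dummies(interactions, columns=["category"])

# Use clicks as a (naive) relevance label; click models refine this.
X = encoded.drop(columns=["clicked"])
y = encoded["clicked"]

# Split by query so all documents of a query land in the same set
# (in practice you would also drop the id columns from the features).
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=X["query_id"]))
X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]
```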
How to Train your Model
- Libraries Overview
  - RankLib
  - XGBoost
- Hands On Exercises (see the sketch below)
  - Let’s train a model using XGBoost
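A minimal sketch of what the XGBoost exercise looks like, using its learning-to-rank objective. The toy data, group sizes, and parameters are illustrative placeholders:

```python
import numpy as np
import xgboost as xgb

# Toy feature matrix and graded relevance labels (0 = bad, 2 = good).
X = np.random.rand(8, 3)          # 8 query-document pairs, 3 features
y = np.array([2, 1, 0, 0, 1, 2, 0, 1])

# Group sizes tell XGBoost which rows belong to the same query:
# here, each of the two queries has 4 candidate documents.
dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group([4, 4])

params = {
    "objective": "rank:ndcg",   # listwise LTR objective optimising NDCG
    "eval_metric": "ndcg@4",
    "eta": 0.1,
    "max_depth": 4,
}
model = xgb.train(params, dtrain, num_boost_round=50)

# Score unseen documents; higher scores rank first.
scores = model.predict(xgb.DMatrix(np.random.rand(4, 3)))
print(scores)
```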
Evaluation and Explainability
- Offline Model Evaluation
  - Metrics
  - Open Source Tools
- Online Model Evaluation
  - A/B Testing
  - Interleaving
- Explain your Model
  - Overview
  - Open Source Libraries
- Hands On Exercises (see the sketch below)
  - Let’s explain a model using TreeSHAP
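A minimal sketch of the TreeSHAP exercise with the shap library, run on a toy XGBoost ranker like the one in the training sketch above. Data and model are illustrative:

```python
import numpy as np
import xgboost as xgb
import shap

# Train a tiny illustrative ranker (see the training sketch above).
X = np.random.rand(8, 3)
y = np.array([2, 1, 0, 0, 1, 2, 0, 1])
dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group([4, 4])
model = xgb.train({"objective": "rank:ndcg"}, dtrain, num_boost_round=20)

# TreeSHAP computes exact SHAP values for tree ensembles efficiently:
# one value per feature per document, quantifying how much that feature
# pushed the ranking score up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values.shape)  # (n_samples, n_features)

# Global view: mean |SHAP value| per feature acts as feature importance.
shap.summary_plot(shap_values, X, show=False)
```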
Open Source Search Engines Integration
- Apache Solr Integration OR Elasticsearch Integration
  - Features Management
  - Ranking Models Management
  - How to rerank search results
  - Extract features from the results
  - Interleaving (Apache Solr only)
- Hands On Exercises (see the sketch below)
  - Upload Features definition and Models
  - Run a re-ranking query
  - Interleave two models in the results
  - Extract features from the results
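For the Apache Solr flavour, a minimal sketch of a re-ranking query that also extracts the feature vectors, via the Solr LTR module. The collection, model, and feature-store names are hypothetical:

```python
import requests

# Hypothetical Solr collection with the LTR module enabled.
SOLR = "http://localhost:8983/solr/mycollection/select"

# Re-rank the top 100 BM25 results with an uploaded LTR model,
# passing the raw user query as an external feature (efi).
params = {
    "q": "laptop",
    "rq": "{!ltr model=myModel reRankDocs=100 efi.user_query='laptop'}",
    # [features] returns the extracted feature vector for each document.
    "fl": "id,score,[features store=myFeatureStore efi.user_query='laptop']",
}

response = requests.get(SOLR, params=params)
for doc in response.json()["response"]["docs"]:
    print(doc["id"], doc["score"], doc["[features]"])
```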
War Stories