
Learning to Rank Training

In this Learning to Rank Training, you will solve a ranking problem by integrating a machine learning system with your search engine. You will learn how to build a training set, train your model, and test it both offline and online.
The Learning to Rank Training covers Apache Solr, Elasticsearch, OpenSearch, and Vespa integrations: choose the search technology you use.

Recorded

£400.00

If you are not able to attend public training, this is the best option for you: take the course at your own pace and learn whenever it fits your schedule.
Only the Apache Solr and Elasticsearch Integrations are available as recorded.

  • Top expert trainers
  • Q&A by e-mail
  • Certificate of Attendance

Private

Ask for a quote

If you are looking for intensive sessions tailored to your (or your team’s) experience, then private training is your perfect choice!

  • In Person or Online
  • Tailored Training
  • Top Expert Trainers
  • Certificate of Attendance

Based on experience with leading companies including

Universal
BBC
Alfresco
"The training has prepared me well to tackle my own project. It helped me to understand how to set up the project and which tools or algorithms I can use for it. The content of the training is quite compact, but not overloaded, so that there was also time for individual questions. I particularly liked the fact that Alessandro shared his experiences from older projects, which allowed him to point out potential problems."
Julia Silberberg
Jobware

PREREQUISITES

• Basic understanding of Search Engines and Machine Learning

WHAT YOU WILL LEARN

• How to integrate Machine Learning with your Search Engine to tune your relevance function;

• How to gather user feedback and prepare your training set;

• The ranking model life cycle (training and deployment);

• How to test your ranking models offline and online.

INTENDED AUDIENCE

• Technical Managers
• Data scientists
• Software Engineers
• Developers
  • Machine Learning enthusiasts

Your Trainers

Alessandro Benedetti

APACHE LUCENE/SOLR COMMITTER
APACHE SOLR PMC MEMBER

Alessandro has been involved in designing and developing search relevance solutions since 2010.
Over the years he has worked on a variety of projects with open source technologies, building search solutions that satisfy users' information needs, often integrating them with machine learning and artificial intelligence.

Anna Ruggero

An R&D Search Software Engineer, her focus is the integration of Information Retrieval systems with advanced Machine Learning, Natural Language Processing, and Data Mining algorithms. She likes to find new solutions that connect her work as a Search Consultant with the latest academic research.

Topics

1. Introduction to Learning To Rank

The training begins by providing participants with a foundational understanding of Learning To Rank (LTR). This introductory module offers insights into the significance, applications, and fundamental concepts underlying LTR, setting the stage for deeper exploration.

2. Offline Learning To Rank (approaches and algorithms)

Following the introduction, the focus shifts to Offline Learning To Rank. Participants delve into various approaches and algorithms, understanding the mechanics, advantages, and limitations of each. Through real-world examples and case studies, learners gain practical insights into implementing these techniques effectively.

3. Online Learning To Rank (algorithms and state of the art)

Building on the offline concepts, the training then transitions to Online Learning To Rank. This module explores advanced algorithms and delves into the cutting-edge developments in the field. Participants will grasp the nuances of online ranking scenarios, algorithmic strategies, and emerging trends shaping the future of LTR.

4. Create a Training Set (feature engineering and relevance estimation)

A pivotal component of the training involves hands-on experience in crafting a training set. Participants will learn the intricacies of feature engineering, understanding how to select, extract, and transform relevant features for optimal ranking performance. Additionally, the module covers relevance estimation techniques, ensuring that the training data accurately represents the desired ranking outcomes.
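As a small illustration of what a finished training set looks like (not the course's actual materials), most LTR libraries consume the LETOR/SVMlight text format: one row per query-document pair, carrying a graded relevance label, a query identifier, and the engineered feature values. The feature names in the comment below are hypothetical examples:

```python
def to_letor_row(relevance, query_id, features):
    """Serialize one (query, document) pair into the LETOR/SVMlight format:
    <relevance> qid:<query id> 1:<feature 1> 2:<feature 2> ..."""
    feats = " ".join(f"{i + 1}:{v}" for i, v in enumerate(features))
    return f"{relevance} qid:{query_id} {feats}"

# e.g. relevance grade 3 for query 42, with three illustrative features
# (say: a BM25 score, a title-match flag, a document freshness score)
row = to_letor_row(3, 42, [12.7, 1.0, 0.35])
```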

5. Learning To Rank Metrics

The efficacy of LTR models hinges on appropriate evaluation metrics. This segment acquaints participants with a range of metrics tailored for assessing ranking quality. From precision-recall curves to NDCG (Normalized Discounted Cumulative Gain), learners will gain proficiency in selecting and interpreting metrics that align with specific application requirements.
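For intuition on how NDCG works, it can be computed in a few lines: the DCG of the observed ranking divided by the DCG of the ideal (relevance-sorted) ranking. A minimal sketch using the common exponential-gain formulation:

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain with exponential gain (2^rel - 1)."""
    return sum((2 ** rel - 1) / math.log2(pos + 2)
               for pos, rel in enumerate(relevances))

def ndcg(ranked_relevances, k=None):
    """NDCG: DCG of the observed ranking divided by DCG of the ideal one."""
    if k is not None:
        ranked_relevances = ranked_relevances[:k]
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

# A ranking that places the most relevant documents first scores higher:
perfect = ndcg([3, 2, 1, 0])   # exactly 1.0
reversed_order = ndcg([0, 1, 2, 3])  # noticeably lower
```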

6. Create a Test Set for Evaluation

To culminate the training, participants learn the critical task of creating a test set for evaluation. This module shows how to apply the acquired knowledge, validate model performance, and fine-tune ranking algorithms, ensuring robustness and reliability in real-world scenarios.

7. Available and most common learning to rank libraries and approaches

Explore a comprehensive overview of the diverse Learning to Rank (LTR) libraries and approaches widely used in the industry. Understand the strengths, limitations, and specific use cases associated with each library. Gain insights into the landscape of LTR methodologies, laying the foundation for informed decision-making in selecting the most suitable approach for your business needs.

8. Offline Testing for Business

Delve into the critical aspect of offline testing for business applications. Learn the methodologies and best practices for evaluating LTR models offline, ensuring robustness and effectiveness in simulated scenarios. Understand the significance of offline testing metrics and how they align with real-world business objectives, providing a solid foundation for model refinement and optimization.

9. Build a Training and Test Set with Practical Code

In this section, we guide you through code demonstrations on effectively splitting your dataset into training and test sets for model training. While we won’t delve into dataset construction, you’ll gain hands-on experience in the crucial step of partitioning data for optimal model preparation.
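The key pitfall this step guards against is splitting at the row level: all documents for a given query must land on the same side of the split, or query-level signals leak from training into testing. A minimal pure-Python sketch of a query-grouped split (the function and row layout are illustrative, not the course's actual code):

```python
import random

def split_by_query(rows, test_fraction=0.2, seed=42):
    """Split (query_id, features, relevance) rows into train and test sets,
    keeping every document of a given query on the same side of the split."""
    query_ids = sorted({qid for qid, _, _ in rows})
    rng = random.Random(seed)
    rng.shuffle(query_ids)
    n_test = max(1, int(len(query_ids) * test_fraction))
    test_qids = set(query_ids[:n_test])
    train = [r for r in rows if r[0] not in test_qids]
    test = [r for r in rows if r[0] in test_qids]
    return train, test
```

Libraries such as scikit-learn offer the same idea via group-aware splitters; the point is that the grouping key is the query, not the individual row.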

10. Model Evaluation Metrics

Explore a variety of model evaluation metrics used in assessing the performance of LTR models. Dive into the nuances of precision, recall, NDCG, and other key metrics, understanding how they reflect different aspects of ranking quality. Acquire the skills to interpret and choose appropriate evaluation metrics based on specific business goals, ensuring accurate and meaningful model assessment.

11. Train and Test the Model with Practical Code

Roll up your sleeves and engage in hands-on training to build, train, and test LTR models. Follow practical code examples that illustrate the implementation of popular algorithms and frameworks. Develop proficiency in the end-to-end process of model development, from training to testing, empowering you to apply these skills to real-world business scenarios.
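As a toy illustration of that end-to-end loop (the course works with real LTR libraries; this pointwise linear model trained by stochastic gradient descent is only a self-contained sketch), train on labeled feature vectors, then rank unseen documents by the learned score:

```python
def train_linear_ranker(X, y, lr=0.01, epochs=200):
    """Fit a linear scoring function to (features, relevance) pairs with SGD.
    A pointwise stand-in for real LTR trainers such as LambdaMART."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for features, target in zip(X, y):
            pred = sum(wi * fi for wi, fi in zip(w, features))
            err = pred - target
            w = [wi - lr * err * fi for wi, fi in zip(w, features)]
    return w

def score(w, features):
    return sum(wi * fi for wi, fi in zip(w, features))

# Toy data: two illustrative features per document, graded relevance labels
X_train = [[1.0, 1.0], [0.5, 0.0], [0.1, 0.0]]
y_train = [3.0, 1.0, 0.0]
w = train_linear_ranker(X_train, y_train)

# "Test" phase: rank unseen documents by the learned score
docs = [[0.2, 0.0], [0.9, 1.0]]
ranked = sorted(docs, key=lambda f: score(w, f), reverse=True)
```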

12. Common Mistakes

Navigate through common pitfalls and challenges encountered in LTR projects. Identify and understand mistakes that can impact the effectiveness of your models. Equip yourself with strategies to troubleshoot issues, optimize model performance, and enhance the overall success of your LTR implementation.

13. Online Testing for Business

Extend your knowledge to the realm of online testing for LTR models in business contexts. Explore methodologies and strategies for conducting online evaluations, leveraging real-time user interactions to assess model performance dynamically. Understand the implications of online testing on user experience and business outcomes.
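One widely used online-evaluation technique is interleaving: results from two rankers are merged into a single list shown to real users, and clicks are credited to whichever ranker contributed the clicked document. A simplified team-draft sketch (an assumed structure for illustration, not the course's implementation):

```python
import random

def team_draft_interleave(ranking_a, ranking_b, seed=None):
    """Team-draft interleaving: the two rankers alternately draft their best
    not-yet-chosen document; clicks are later credited to the drafting team."""
    rng = random.Random(seed)
    interleaved, team = [], {}
    a, b = list(ranking_a), list(ranking_b)
    while a or b:
        # randomize which ranker drafts first in each round
        order = [("A", a), ("B", b)]
        rng.shuffle(order)
        for name, ranking in order:
            # skip documents the other team already drafted
            while ranking and ranking[0] in team:
                ranking.pop(0)
            if ranking:
                doc = ranking.pop(0)
                team[doc] = name
                interleaved.append(doc)
    return interleaved, team
```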

14. Model Explainability Exploiting the SHAP Library with Practical Code

Uncover the importance of model explainability in LTR and learn how to leverage the SHAP (SHapley Additive exPlanations) library to interpret and explain model decisions. Walk through practical code examples that demonstrate the application of SHAP values, providing transparency into the factors influencing ranking outcomes. Enhance your ability to communicate model insights effectively within your business context.

15. Choose Your Integration

Apache Solr Integration

1. Features Management
2. Ranking Models Management
3. How to Rerank Search Results
4. Interleaving
5. Extract Features from the Results
6. Live Exercises

Elasticsearch Integration

1. Features Management
2. Ranking Models Management
3. How to Rerank Search Results
4. Extract Features from the Results
5. Live Exercises

OpenSearch Integration

1. Features Management
2. Ranking Models Management
3. How to Rerank Search Results
4. Extract Features from the Results
5. Live Exercises

Vespa Integration

1. Features Management
2. Ranking Models Management
3. How to Rerank Search Results
4. Extract Features from the Results
5. Live Exercises

FAQ

In which formats is the training available?
The Learning To Rank training is available in private or recorded form. The private form is available both online and in person.

How long does the training last?
The full Learning To Rank training lasts around 12 hours.

Can I ask questions?
For the recorded version, you can email any questions you have. The private sessions include plenty of time for questions and answers.

Who are the trainers?
  – Alessandro Benedetti, Apache Lucene/Solr committer and Apache Solr PMC member.
  – Anna Ruggero, R&D Software Engineer at Sease.

Will I receive a certificate?
Yes: at the end of the training you will receive a Certificate of Attendance by e-mail.

    Feel free to contact us
