Query-level features and under-sampled queries: how do you handle them? Find out with our new Learning to Rank implementations
How the eDismax sow parameter works The sow (split on whitespace) parameter is an eDismax query parser parameter [1] that regulates aspects of query-time text analysis, which affect how the user query is parsed and how the internal Lucene query is built. It is particularly relevant in multi-term and multi-field search. If sow=true: first the user query text is…
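As a rough illustration of the parsing difference (this is a toy sketch, not Solr's actual implementation; the analyzer and the synonym map below are invented for demonstration): with sow=true the query text is whitespace-split before analysis, so a multi-term synonym can never match the whole phrase, while with sow=false the full string reaches the field analyzer.

```python
# Illustrative sketch only -- not Solr's implementation.
SYNONYMS = {"new york": "ny"}  # a hypothetical multi-term synonym


def analyze(text):
    """Toy analyzer: lowercases and applies multi-term synonyms."""
    text = text.lower()
    for phrase, replacement in SYNONYMS.items():
        text = text.replace(phrase, replacement)
    return text.split()


def parse(query, sow):
    if sow:
        # sow=true: split on whitespace first, then analyze each term
        # separately, so a multi-term synonym never sees the whole phrase.
        return [tok for term in query.split() for tok in analyze(term)]
    # sow=false: the whole query string reaches the analyzer.
    return analyze(query)


print(parse("New York pizza", sow=True))   # per-term analysis
print(parse("New York pizza", sow=False))  # whole-string analysis
```

This sketch mirrors why sow=false is typically needed for multi-term synonym expansion to take effect.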
This blog post aims to illustrate how to generate the query Id and how to manage the creation of the Training Set
This blog post covers several analyses of an LTR model and its explanation using the open-source library SHAP
In this blog post, the Elasticsearch _source field is compared with stored fields and doc values from a performance point of view
Introduction With Rated Ranking Evaluator Enterprise approaching soon, we take the occasion to explain in detail why Offline Search Quality Evaluation is so important nowadays and what you can already do with the Rated Ranking Evaluator open-source libraries. More news will come as we approach the V1 release date. Stay tuned! Search Quality Evaluation Evaluation…
This blog post aims to illustrate step by step a Learning to Rank project on a Daily Song Ranking problem using open source libraries.
Interleaving is an online evaluation approach for ranking functions, contributed to Apache Solr Learning to Rank by Sease.
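For readers unfamiliar with the approach, one common interleaving scheme, Team-Draft Interleaving, can be sketched as follows. This is an illustrative re-implementation under stated assumptions, not the code contributed to Apache Solr:

```python
import random


def team_draft_interleave(ranking_a, ranking_b, seed=42):
    """Team-Draft Interleaving sketch: the ranker with fewer picks so far
    (ties broken randomly) adds its highest not-yet-used result; each result
    is tagged with the 'team' that picked it so clicks can be attributed."""
    rng = random.Random(seed)
    interleaved, teams, used = [], [], set()
    candidates = set(ranking_a) | set(ranking_b)
    while len(used) < len(candidates):
        a_picks, b_picks = teams.count("A"), teams.count("B")
        prefer_a = a_picks < b_picks or (a_picks == b_picks and rng.random() < 0.5)
        order = [("A", ranking_a), ("B", ranking_b)]
        if not prefer_a:
            order.reverse()
        for team, ranking in order:
            doc = next((d for d in ranking if d not in used), None)
            if doc is not None:  # preferred ranker may be exhausted
                used.add(doc)
                interleaved.append(doc)
                teams.append(team)
                break
        else:
            break  # both rankings exhausted
    return interleaved, teams


docs, teams = team_draft_interleave(["d1", "d2", "d3"], ["d2", "d4"])
print(docs, teams)
```

At evaluation time, clicks on the interleaved list are credited to the team that contributed each document, giving an online preference signal between the two ranking functions.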
In this post we describe what an Intervals Table is and how to build it using a Behaviour-Driven Development (BDD) approach.
Introduction A common problem with machine learning models is their interpretability and explainability. We create a dataset and train a model to achieve a task; then we would like to understand how the model obtains its results. This is often quite difficult, especially with very complex models. In this blog post, I would…