
RRE-Enterprise: Evaluation Overview Dashboard

Welcome back to the RRE-Enterprise tutorial blog post series!

Today let’s focus on the Overview dashboard: a UI component designed to give you a quick understanding of the latest and historical status of your search quality evaluations:

On the left-hand side, there’s the list of all the data collections you have historically evaluated.
The symbol next to each collection tells you, at a glance, whether the latest evaluation showed a general improvement across all the metrics of interest.
This is the first signal and a very quick summary of how the latest evaluation went.

Selecting one (or more) collections of interest opens the widgets view, showing all the metrics you selected at evaluation time.
At a glance, you can see how the various metric scores changed in the latest evaluation.

If you are interested in viewing the historical track of the different search quality evaluations you have run, you can show the history:

This gives you a better idea of the progress (or regression) you have made over time.
Each collection is shown in a different colour, and if you hover over a point you see the time and score details.
It’s also possible to zoom in, in case you want to examine the progression more closely.

And that’s all for today!
The overview functionality is meant to be the most user-friendly approach to the search quality evaluation process, and it was designed to be as simple and concise as possible.
The ideal user of this section is a busy professional, such as a manager, who needs a quick overview of the quality status of the different collections.
In the next blog post, we’ll cover the details of the Explore/Compare section, designed for software engineers working on a search project.


Shameless plug for our training and services!

Did I mention we run a Search Quality Evaluation training specifically designed for product managers (and we also offer a variant for software engineers)?
We also provide consulting on these topics: get in touch if you want to set up a search quality evaluation pipeline for your search project!


Subscribe to our newsletter

Did you like this post about the RRE-Enterprise Evaluation Overview dashboard? Don’t forget to subscribe to our Newsletter to stay up to date with the Information Retrieval world!


Alessandro Benedetti

Alessandro Benedetti is the founder of Sease Ltd. A Senior Search Software Engineer, his focus is on R&D in information retrieval, information extraction, natural language processing, and machine learning.
