
London Information Retrieval Meetup [September 2021]

September 15 @ 3:00 pm - 5:00 pm

We are so happy to announce the tenth London Information Retrieval Meetup, a free evening meetup aimed at Information Retrieval enthusiasts and professionals who are curious to explore and discuss the latest trends in the field.

This time the meetup will be a hybrid event, both in person and online. The in-person event will be a satellite event of DESIRES 2021, the Design of Experimental Search & Information Retrieval Systems conference.


Location: Department of Information Engineering, University of Padua – Padua (Italy) | Aula Magna Lepsky

Date: 15th September 2021 | 3:00-5:00 PM (GMT+1)


Registration required 


The event will feature two technical talks, with a Q&A session after each.



After a short welcome and latest-news speech from our founder, Alessandro Benedetti, we will proceed to the first talk.

First talk




Alessandro has been involved in designing and developing search relevance solutions since the early days of Apache Solr 1.4 and the edismax query parser in 2010. Over the years he has worked on various projects aiming to build search solutions that satisfy users' information needs, often integrating such solutions with machine learning and artificial intelligence technologies.



Andrea Gazzarini is a curious software engineer, mainly focused on the Java language and Search technologies. With more than 15 years of experience in various software engineering areas, his adventure in the search world began in 2010, when he met Apache Solr and later Elasticsearch.


Rated Ranking Evaluator Enterprise: the next generation of free Search Quality Evaluation Tools

Every information retrieval practitioner has struggled with the task of evaluating how well a search engine is satisfying the user’s information needs.
Improving the correctness and effectiveness of a search system requires a set of tools that help to measure such aspects. 
Back in 2018, Rated Ranking Evaluator (RRE) came to the rescue.

RRE is an open-source search quality evaluation tool that can be used to produce a set of reports about the quality of a system, iteration after iteration, and that can be integrated within a continuous integration infrastructure to monitor quality metrics after each release. 
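To make the idea of a search quality report concrete, here is a minimal sketch of the kind of offline metric such a tool computes: precision@k over rated results, compared across two iterations of a system. The judgments, document ids, and function are invented for illustration and are not RRE's actual data model or API.

```python
# Precision@k: the fraction of the top-k results whose rating meets a
# relevance threshold. Comparing the metric across system versions is
# how a tool like RRE tracks quality iteration after iteration.

def precision_at_k(results, judgments, k, threshold=1):
    """Fraction of the top-k results rated at or above `threshold`."""
    top_k = results[:k]
    relevant = sum(1 for doc_id in top_k if judgments.get(doc_id, 0) >= threshold)
    return relevant / k

# Graded judgments for one query: doc id -> rating (0 = not relevant).
judgments = {"doc1": 2, "doc2": 0, "doc3": 1, "doc4": 2}

# Ranked lists returned by two iterations of the system under test.
run_v1 = ["doc2", "doc1", "doc5", "doc3"]
run_v2 = ["doc1", "doc4", "doc3", "doc2"]

print(precision_at_k(run_v1, judgments, 4))  # 0.5
print(precision_at_k(run_v2, judgments, 4))  # 0.75
```

Running such a computation for every query in a rating set, after every release, is what makes regression-style quality monitoring in a CI pipeline possible.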

Many aspects remained problematic though:

– how to directly evaluate a middle-layer search API that communicates with Apache Solr or Elasticsearch?
– how to easily generate explicit and implicit ratings without spending hours on tedious JSON files?
– how to better explore the evaluation results, with rich widgets and meaningful insights?

Rated Ranking Evaluator Enterprise solves these problems and much more.

Join us as we introduce the next generation of open-source search quality evaluation tools, exploring the internals and real-world scenarios!

Second talk



Giorgio Maria Di Nunzio is Associate Professor at the Department of Information Engineering of the University of Padua, Italy. 
His current research interests include the design of interactive machine learning models for the retrieval of medical and legal documents, and the evaluation of Technology-Assisted Review systems based on Continuous Active Learning. 
In particular, he is interested in predicting the costs of high-recall systems in relation to query reformulation and rank fusion.

Am I Missing Something? Query Performance Prediction by Means of Intent-Aware Metrics in Systematic Reviews

The study of query reformulation in Information Retrieval has driven a lot of interest in recent years. In fact, the performance of a system can greatly improve when the “right” formulation of an information need is selected or when the results of multiple formulations are fused together. 
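One widely used way to fuse the results of multiple query formulations is Reciprocal Rank Fusion (RRF). The sketch below is a generic illustration of that technique, not the speaker's method; the document ids are hypothetical, and the constant k=60 is the value commonly used in the RRF literature.

```python
# Reciprocal Rank Fusion: each document's fused score is the sum of
# 1 / (k + rank) over every ranked list it appears in, so documents
# that rank highly under several query formulations rise to the top.

def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists into one, best fused score first."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Result lists from three reformulations of the same information need.
fused = rrf_fuse([
    ["d1", "d2", "d3"],
    ["d3", "d1", "d4"],
    ["d2", "d1", "d5"],
])
print(fused[0])  # d1 — ranked near the top in all three lists
```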

One of the main challenges in this research area is predicting the best-performing query (or queries) among the possible variations. One use case of query performance prediction is the systematic compilation of literature reviews. Systematic reviews are scientific investigations that use strategies to include a comprehensive search of all potentially relevant articles. Since time and resources for compiling a systematic review are limited, the search must be bounded: for example, one may want to estimate how far the horizon of the search should extend (i.e., all possible cases/documents that could exist in the literature) in order to stop before the resources are exhausted.

In this talk, we analyze the advantages and drawbacks of intent-aware metrics used to estimate the amount of missing information for each query reformulation during a search session.


Sease Ltd


Via Gradenigo 6/b
Padua, 35131 Italy