We are pleased to announce the ninth London Information Retrieval Meetup, a free evening meetup aimed at Information Retrieval enthusiasts and professionals who are curious to explore and discuss the latest trends in the field.
This time we are going fully remote: the COVID-19 situation makes hosting the event in person impossible.
The evening will be structured around two technical talks, each followed by a Q&A session.
After a short welcome & latest news speech from our Founder Alessandro Benedetti, we will proceed to the first talk.
first talk
Lucene-grep
I’m going to present my pet project Lucene-grep. Lucene-grep is a multi-platform command-line full-text search tool that leverages Lucene to get the work done and have some fun while doing it.
My motivation behind Lucene-grep is to bring the awesomeness of Lucene to new places while making it simple to install and start hacking. This is an interesting challenge, given that Lucene is a Java library often viewed as old and too complicated to hack on.
In the live coding session, I will show how to turn Lucene-grep from a simple full-text search tool e.g.
```shell
echo "Lucene is awesome" | lmgrep "Lucene"
```
into an alternative for the Elasticsearch Percolator.
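The Percolator idea inverts the usual search flow: instead of indexing documents and running queries against them, you keep a set of stored queries and check which of them match an incoming document. A minimal sketch of that flow using the piped invocation shown above; it assumes lmgrep follows grep's exit-code convention (zero on match, non-zero otherwise), and it falls back to plain grep as a stand-in where lmgrep is not installed:

```shell
# Percolator-style matching: run each stored query against one incoming
# document and report which queries hit.

# Fallback shim for environments without lmgrep installed; plain grep
# stands in for lmgrep's Lucene-powered matching in this sketch.
command -v lmgrep >/dev/null 2>&1 || lmgrep() { grep -q -- "$1"; }

doc="Lucene is awesome"                 # the incoming "document"
matched=""
for q in "Lucene" "Solr" "awesome"; do  # the stored queries
  if echo "$doc" | lmgrep "$q" >/dev/null 2>&1; then
    matched="$matched $q"
    echo "matched: $q"
  fi
done
```

A real percolator would load the stored queries once and match each document in a single pass; the loop here just makes the inverted query/document relationship explicit.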
the speaker
Dainius Jocas
Dainius is a Staff Engineer at Vinted, the company whose mission is to make second-hand the first choice worldwide.
His career is full of projects whose goal was to make search engines find things, and find them fast.
He is an enthusiast of open-source software. He has made a command-line search utility called Lucene-grep.
Outside of work he enjoys having fun with his lovely Bernese Mountain Dogs.
He’s based in Vilnius, Lithuania.
video
second talk
Gathering Multiple Ratings w/ Quepid
At Haystack 2019, Tito Sierra and Tara Diedrichsen made the case for a Human Rated Testing program as part of improving search.
In this talk we’ll share an update on adding support for multiple judgements from multiple raters to Quepid, an open source tool for supporting HRT programs. We’ll talk about some of the analytics to measure how aligned the raters are, and solicit feedback from the community on next steps.
Lastly, we’ll do some live rating with the audience, to demonstrate some of the pitfalls of human judgements.
the speakers
Eric Pugh





