
Faster Vector Search: Early Termination Strategy Now in Apache Solr
In this blog, we explore an early termination strategy designed to make approximate k-NN search faster in Apache Solr.
In this blog post, we provide a comprehensive overview of the features based on large language models (LLMs) currently supported by OpenSearch.

Sease will talk at the upcoming Search Solutions Conference and Tutorial 2025, hosted in London by the BCS group.

In this blog post, we examine the ColBERT paper, which adapts deep learning models, in particular, BERT, for efficient retrieval.

We are pleased to announce the twenty-fifth edition of the London Information Retrieval & AI Meetup, a hybrid event about Information Retrieval.

This blog post explores GLiNER as a viable alternative to large language models (LLMs) for query parsing tasks.

This blog post explores how GLiNER works, highlighting its underlying architecture and how it differs from traditional NER models.

Imagine stumbling upon a shiny platform that claims to offer “done-for-you” searches with AI-powered relevance and conversational chat, all rolled into one, perfectly working out…

Explore the Semantic Highlighting feature in OpenSearch v3.0, how it works, and how it compares to the Sease Solr Neural Highlighting plugin.

This blog post explores an AI-powered Filter Assistant designed to improve the User eXperience of navigating search results efficiently and effectively.
We are Sease, an Information Retrieval Company based in London, focused on providing R&D project guidance and implementation, Search consulting services, Training, and Search solutions using open source software like Apache Lucene/Solr, Elasticsearch, OpenSearch and Vespa.