
Faster Vector Search: Early Termination Strategy Now in Apache Solr
In this blog post, we explore an early termination strategy designed to make approximate k-NN search faster in Apache Solr.
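For readers less familiar with Solr's vector search, the sketch below shows what a baseline approximate k-NN request looks like; the early termination strategy discussed in this post is aimed at speeding up queries of this kind. The Solr endpoint, collection name, dense vector field name and vector values are illustrative placeholders, not details taken from the post.

```python
import requests

# Minimal sketch of an approximate k-NN query using Solr's {!knn} query parser.
# "my_collection" and the field name "vector" are placeholders; adapt them to
# your own schema and deployment.
SOLR_URL = "http://localhost:8983/solr/my_collection/select"

# The query vector's length must match the field's vectorDimension in the schema.
query_vector = [0.12, 0.34, 0.56, 0.78]

params = {
    # {!knn} runs an approximate nearest-neighbour search on the dense vector
    # field and returns the topK closest documents.
    "q": "{!knn f=vector topK=10}" + str(query_vector),
    "fl": "id,score",
}

response = requests.get(SOLR_URL, params=params, timeout=10)
response.raise_for_status()
for doc in response.json()["response"]["docs"]:
    print(doc["id"], doc["score"])
```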
