The goal of this blog post is to highlight the new vector search capabilities introduced in Elasticsearch 8.8.0, especially the Elasticsearch Relevance Engine (ESRE).
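As a taste of what ESRE enables, here is a minimal sketch of a semantic query using Elastic's ELSER sparse model through the `text_expansion` query; the index name, the `ml.tokens` field and the deployed model id are illustrative assumptions, not details taken from the post.

```python
import requests

# A minimal sketch of an ELSER semantic query in Elasticsearch 8.8+,
# assuming a hypothetical index "my-index" whose documents carry an
# ELSER-populated "ml.tokens" field and that ".elser_model_1" is deployed.
query = {
    "query": {
        "text_expansion": {
            "ml.tokens": {
                "model_id": ".elser_model_1",
                "model_text": "how to tune relevance for e-commerce search",
            }
        }
    }
}

response = requests.post(
    "http://localhost:9200/my-index/_search",
    json=query,
    timeout=10,
)
for hit in response.json()["hits"]["hits"]:
    print(hit["_score"], hit["_id"])
```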
Large Language Models (LLMs) are ubiquitous nowadays: they are the new big thing and everyone is talking about the latest and greatest! But one of the main challenges of starting a project that involves one or more language models is choosing the best one for your information domain. You shouldn’t neglect this step, as it’s fundamental…
This blog post showcases the vector search improvements introduced in the latest versions of Elasticsearch (8.6 and 8.7).
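For a concrete feel of the kind of query these improvements target, here is a minimal sketch of an approximate kNN search in Elasticsearch 8.x; the index name, the embedding field and its dimensionality are hypothetical.

```python
import requests

# A minimal sketch of an approximate kNN search in Elasticsearch 8.x,
# assuming a hypothetical index "my-index" with a dense_vector field
# "embedding" indexed for kNN search. The query vector is a placeholder
# and must match the field's configured dimensionality.
query = {
    "knn": {
        "field": "embedding",
        "query_vector": [0.12, -0.45, 0.33, 0.08],
        "k": 10,
        "num_candidates": 100,
    },
    "_source": ["title"],
}

response = requests.post(
    "http://localhost:9200/my-index/_search",
    json=query,
    timeout=10,
)
for hit in response.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```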
An end-to-end tutorial on implementing Neural Search in Vespa: from document and model preparation to embedding creation and k-NN queries.
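To give a flavour of the final step of such a tutorial, here is a minimal sketch of a Vespa k-NN query using the nearestNeighbor operator; the field name, rank profile and query tensor below are assumptions for illustration, not the ones from the tutorial.

```python
import requests

# A minimal sketch of a Vespa k-NN query, assuming a hypothetical schema
# with a tensor field "embedding", a rank profile "semantic" that declares
# a query tensor query(q), and a local Vespa container on port 8080.
# The query embedding is a placeholder; in practice it is produced by the
# same model used to embed the documents.
body = {
    "yql": "select * from sources * where {targetHits: 10}nearestNeighbor(embedding, q)",
    "ranking": "semantic",
    "input.query(q)": "[0.12, -0.45, 0.33, 0.08]",
    "hits": 10,
}

response = requests.post("http://localhost:8080/search/", json=body, timeout=10)
for hit in response.json()["root"].get("children", []):
    print(hit["relevance"], hit["id"])
```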
In this blog post we present the learning to rank features available in Apache Solr, with a focus on categorical features and how to manage them.
How does the FeatureLogger work? When is the Feature Vector Cache used in Solr? Does the cache speed up the rerank process?
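For context, feature vectors are typically logged through Solr's [features] document transformer; the sketch below shows the idea, with a hypothetical core and feature store name.

```python
import requests

# A minimal sketch of logging feature vectors with Solr's [features]
# transformer, assuming a hypothetical core "techproducts" with a feature
# store "myFeatureStore" already uploaded.
params = {
    "q": "ipod",
    "fl": "id,score,[features store=myFeatureStore efi.user_query=ipod]",
    "rows": 5,
    "wt": "json",
}

response = requests.get(
    "http://localhost:8983/solr/techproducts/select",
    params=params,
    timeout=10,
)
for doc in response.json()["response"]["docs"]:
    # The "[features]" key holds the feature=value pairs produced by the FeatureLogger
    print(doc["id"], doc.get("[features]"))
```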
In the previous blog post of this series, we looked at how to use BERT to improve search relevance by performing document re-ranking. The assumption behind this approach is that the set of documents to be re-ranked, also known as candidates, contains the largest possible number of documents relevant to the query. We say that…
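As a rough illustration of the re-ranking step, here is a minimal sketch using a publicly available cross-encoder from the sentence-transformers library; the model checkpoint, query and candidate snippets are placeholders, not the setup used in the series.

```python
from sentence_transformers import CrossEncoder

# A minimal sketch of re-ranking a candidate set with a BERT cross-encoder,
# assuming the candidates were retrieved by a first-stage query (e.g. BM25).
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how to tune garbage collection for search workloads"
candidates = [
    "A guide to JVM garbage collection tuning for Solr.",
    "Release notes for Apache Lucene 9.",
    "Measuring query latency in Elasticsearch clusters.",
]

# The cross-encoder scores each (query, document) pair jointly, which is what
# makes it stronger (and slower) than scoring documents independently.
scores = model.predict([(query, doc) for doc in candidates])
reranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
for doc, score in reranked:
    print(f"{score:.3f}  {doc}")
```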
Neural Search in Apache Solr has been contributed by Sease thanks to Alessandro Benedetti, Apache Lucene/Solr committer, and Elia Porciani.
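In practice, this contribution surfaces as the {!knn} query parser available since Apache Solr 9.0; the sketch below shows a dense vector query against a hypothetical collection and field, with a placeholder query embedding.

```python
import requests

# A minimal sketch of Solr's neural (dense vector) search via the {!knn}
# query parser, assuming a hypothetical collection "my-collection" with a
# dense_vector field named "vector".
query_embedding = "[0.12, -0.45, 0.33, 0.08]"
params = {
    "q": "{!knn f=vector topK=10}" + query_embedding,
    "fl": "id,score",
    "wt": "json",
}

response = requests.get(
    "http://localhost:8983/solr/my-collection/select",
    params=params,
    timeout=10,
)
for doc in response.json()["response"]["docs"]:
    print(doc["score"], doc["id"])
```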
How does a learning to rank query work in Solr? How can we obtain the feature extraction time from the Solr qTime parameter?
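As a reference point, a learning to rank query in Solr is expressed as a rerank query ({!ltr}) on top of the main query, and the response header exposes the overall query time; the collection, model name and external feature below are hypothetical.

```python
import requests

# A minimal sketch of a Solr learning to rank query: the main query retrieves
# candidates and the {!ltr} rerank query rescores the top documents with a
# previously uploaded model ("myLtrModel" is a hypothetical name).
params = {
    "q": "ipod",
    "rq": "{!ltr model=myLtrModel reRankDocs=100 efi.user_query=ipod}",
    "fl": "id,score",
    "wt": "json",
}

response = requests.get(
    "http://localhost:8983/solr/techproducts/select",
    params=params,
    timeout=10,
)
payload = response.json()
# "QTime" in the response header covers the whole request,
# rerank and feature extraction included.
print("QTime (ms):", payload["responseHeader"]["QTime"])
for doc in payload["response"]["docs"]:
    print(doc["score"], doc["id"])
```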
If you have attended our Artificial Intelligence in Search Training, you should now be familiar with the use of Natural Language Processing and Deep Learning applied to search. If you have not, do not worry: we are planning to arrange another date and will keep you posted through our newsletter, so make sure you subscribe. In the meantime, you can…