Exploring Solr Internals : The Lucene Inverted Index
Introduction
This blog post is about the Lucene inverted index and how Apache Solr works internally.
When working with Solr systems, understanding and properly configuring the underlying Lucene index is fundamental to deeply control your search.
With a better knowledge of what the index looks like and how each component is used, you can build a more performant, lightweight and efficient solution.
The scope of this blog post is to explore the different components of the inverted index.
This will be more about data structures and how they contribute to providing search-related functionalities.
For the low-level approaches used to store these data structures, please refer to the official Lucene documentation [1].
Lucene Inverted Index
“In computer science, an inverted index (also referred to as postings file or inverted file) is an index data structure storing a mapping from content, such as words or numbers, to its locations in a database file, or in a document or a set of documents.”
In Memory / On Disk
The inverted index is built in memory at indexing time and periodically flushed to disk as immutable segments; a Lucene index is the collection of such segments. Segment immutability is one of the key reasons Lucene indexing is fast.
Hands On!
Let's take 3 documents as input, each of them with 2 simple fields, and see what the full inverted index looks like:
Doc0
  { "id":"c",
    "title":"video game history"
  },
Doc1
  { "id":"a",
    "title":"game video review game"
  },
Doc2
  { "id":"b",
    "title":"game store"
  }
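To make this concrete, here is a minimal sketch of how these 3 documents could be indexed directly with the Lucene API. It assumes Lucene 8.x; the class name and field type setup are illustrative, and Solr builds the equivalent writer for you from the schema.xml:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class BuildExampleIndex {
  public static void main(String[] args) throws Exception {
    Directory dir = new ByteBuffersDirectory(); // in-memory, just for the example
    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

    // index the title with the richest posting list: docs, freqs, positions and offsets
    FieldType titleType = new FieldType();
    titleType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
    titleType.setTokenized(true);
    titleType.setStored(true);
    titleType.freeze();

    String[][] docs = { { "c", "video game history" },
                        { "a", "game video review game" },
                        { "b", "game store" } };
    for (String[] d : docs) {
      Document doc = new Document();
      doc.add(new StringField("id", d[0], Field.Store.YES)); // single token, not analysed
      doc.add(new Field("title", d[1], titleType));          // analysed and fully inverted
      writer.addDocument(doc);
    }
    writer.commit();
    writer.close();
  }
}
```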
Depending on the configuration in the schema.xml, at indexing time we generate the related data structures.
Let's see the inverted index in its complete form, then explain how each component can be used, and when to omit part of it.
Field: id

| Ordinal | Term | Document Frequency | Posting List |
|---|---|---|---|
| 0 | a | 1 | 1 : 1 : [1] : [0-1] |
| 1 | b | 1 | 2 : 1 : [1] : [0-1] |
| 2 | c | 1 | 0 : 1 : [1] : [0-1] |
Field: title

| Ordinal | Term | Document Frequency | Posting List |
|---|---|---|---|
| 0 | game | 3 | 0 : 1 : [2] : [6-10], 1 : 2 : [1, 4] : [0-4, 18-22], 2 : 1 : [1] : [0-4] |
| 1 | history | 1 | 0 : 1 : [3] : [11-18] |
| 2 | review | 1 | 1 : 1 : [3] : [11-17] |
| 3 | store | 1 | 2 : 1 : [2] : [5-10] |
| 4 | video | 2 | 0 : 1 : [1] : [0-5], 1 : 1 : [2] : [5-10] |
This may look scary at first, so let's analyse the different components of the data structure.
Term Dictionary
The sorted list of the unique terms appearing in the field across the corpus; each term is assigned an ordinal and points to its document frequency and posting list.
Document Frequency
The number of documents in the corpus containing the term (e.g. game appears in 3 documents for the field title).
Posting List
The list of the documents containing the term, with additional details per document. Taking the term game in the field title as an example:
0 : 1 : [2] : [6-10],
1 : 2 : [1, 4] : [0-4, 18-22],
2 : 1 : [1] : [0-4]
Each element of this posting list has the form:
Document Ordinal : Term Frequency : [array of Term Positions] : [array of Term Offsets].
Document Ordinal -> The ordinal (Lucene internal ID) of the document in the corpus containing the related term.
Never rely on this ordinal at application level, as it may change over time (during segment merges, for example).
e.g.
The leading ordinals of the posting list elements (0, 1, 2) tell us that Doc0, Doc1 and Doc2 all contain the term game in the field title.
Term Frequency -> The number of occurrences of the term in the document the posting list element refers to.
e.g.
0 : 1
Doc0 contains 1 occurrence of the term game in the field title.
1 : 2
Doc1 contains 2 occurrences of the term game in the field title.
2 : 1
Doc2 contains 1 occurrence of the term game in the field title.
Term Positions Array -> For each occurrence of the term in the posting list element, it contains the related position in the field content.
e.g.
0 : [2]
Doc0 1st occurrence of the term game in the field title occupies the 2nd position in the field content.
( "title":"video(1) game(2) history(3)" )
1 : [1, 4]
Doc1 1st occurrence of the term game in the field title occupies the 1st position in the field content.
Doc1 2nd occurrence of the term game in the field title occupies the 4th position in the field content.
( "title":"game(1) video(2) review(3) game(4)" )
2 : [1]
Doc2 1st occurrence of the term game in the field title occupies the 1st position in the field content.
( "title":"game(1) store(2)" )
Term Offsets Array -> For each occurrence of the term in the posting list element, it contains the related character offsets in the field content.
e.g.
0 : [6-10]
Doc0 1st occurrence of the term game in the field title starts at the 6th char in the field content and ends at the 10th char (excluded).
( "title":"video game history" : game covers the chars 6-9, so the offset is [6-10] )
1 : [0-4, 18-22]
Doc1 1st occurrence of the term game in the field title starts at the 0th char in the field content and ends at the 4th char (excluded).
Doc1 2nd occurrence of the term game in the field title starts at the 18th char in the field content and ends at the 22nd char (excluded).
( "title":"game video review game" : the first game covers the chars 0-3, the second one the chars 18-21 )
2 : [0-4]
Doc2 1st occurrence of the term game in the field title starts at the 0th char in the field content and ends at the 4th char (excluded).
( "title":"game store" : game covers the chars 0-3, so the offset is [0-4] )
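All the information above can be read back programmatically. The following is a hedged sketch (Lucene 8.x API, hypothetical index path) that walks the posting list of game in the field title, printing document ordinal, term frequency, positions and offsets. Note that Lucene positions are actually 0-based, while this article counts them from 1 for readability:

```java
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.BytesRef;

import java.nio.file.Paths;

public class WalkPostings {
  public static void main(String[] args) throws Exception {
    // hypothetical path: point it at any index built with offsets enabled
    try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")))) {
      for (LeafReaderContext leaf : reader.leaves()) {
        Terms terms = leaf.reader().terms("title");
        if (terms == null) continue;
        TermsEnum termsEnum = terms.iterator();
        if (!termsEnum.seekExact(new BytesRef("game"))) continue;
        System.out.println("docFreq = " + termsEnum.docFreq());
        PostingsEnum postings = termsEnum.postings(null, PostingsEnum.ALL);
        int doc;
        while ((doc = postings.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
          System.out.print(doc + " : " + postings.freq() + " :");
          for (int i = 0; i < postings.freq(); i++) {
            int position = postings.nextPosition(); // also advances the offsets
            System.out.print(" [" + position + "] [" + postings.startOffset() + "-" + postings.endOffset() + "]");
          }
          System.out.println();
        }
      }
    }
  }
}
```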
Live Documents
| Ordinal | Alive |
|---|---|
| 0 | 1 |
| 1 | 1 |
| 2 | 0 |
Alive -> A bit that specifies whether the document is alive or deleted. Deleted documents are filtered out at search time, but their space is physically reclaimed only when segments are merged.
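For the curious, a small sketch (again Lucene 8.x, hypothetical index path) of how the live documents bitset can be inspected:

```java
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.MultiBits;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Bits;

import java.nio.file.Paths;

public class InspectLiveDocs {
  public static void main(String[] args) throws Exception {
    try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")))) {
      Bits liveDocs = MultiBits.getLiveDocs(reader); // null when the index has no deletions
      for (int ord = 0; ord < reader.maxDoc(); ord++) {
        boolean alive = (liveDocs == null) || liveDocs.get(ord);
        System.out.println(ord + " | " + (alive ? 1 : 0));
      }
    }
  }
}
```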
Norms
Norms is a data structure that provides length normalisation and boost factors per field per document.
A numeric value is stored for each field of each document; it is derived from the length of the field content and, possibly, an index-time boost factor.
Norms are used when scoring the documents retrieved for a query.
In detail, they are a way to improve the relevancy of documents containing the term in short fields: with norms enabled, short field contents win over long field contents when matching a query.
Field: title

| Doc Ordinal | Norm |
|---|---|
| 0 | 0.78 |
| 1 | 0.56 |
| 2 | 0.98 |
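To give an idea of where such numbers come from: the classic (TF-IDF) Lucene similarity computed the norm roughly as the index-time boost divided by the square root of the number of terms in the field, lossily encoded into a single byte on disk. The values in the table above are illustrative; here is a simplified sketch of the formula, not the exact encoded value Lucene stores:

```java
// A simplified sketch of the classic (TF-IDF) Lucene length norm.
// Shorter fields get a higher norm, hence a higher score on the same match.
public class LengthNormSketch {
  static float lengthNorm(int numTerms, float boost) {
    return boost * (float) (1.0 / Math.sqrt(numTerms));
  }

  public static void main(String[] args) {
    System.out.println(lengthNorm(3, 1.0f)); // "video game history" -> ~0.577
    System.out.println(lengthNorm(4, 1.0f)); // "game video review game" -> 0.5
    System.out.println(lengthNorm(2, 1.0f)); // "game store" -> ~0.707
  }
}
```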
Schema Configuration
| Lucene Index Option | Solr schema | Description | To Use When... |
|---|---|---|---|
| NONE | indexed="false" | The inverted index will not be built. | You don't need to search in your corpus of documents. |
| DOCS | omitTermFreqAndPositions="true" | The posting list for each term will simply contain the document ids (ordinals) and nothing else. e.g. game -> 0, 1, 2 | You don't need phrase or positional queries, and you don't need the score to be affected by the number of occurrences of a term in a document field. |
| DOCS_AND_FREQS | omitPositions="true" | The posting list for each term will contain the document ids (ordinals) and the term frequency in each document. e.g. game -> 0 : 1, 1 : 2, 2 : 1 | You don't need phrase or positional queries, but you do need scoring to take term frequencies into consideration. |
| DOCS_AND_FREQS_AND_POSITIONS | Default when indexed="true" | The posting list for each term will contain the term positions in addition. e.g. game -> 0 : 1 : [2], 1 : 2 : [1, 4], 2 : 1 : [1] | You do need phrase or positional queries, and you do need scoring to take term frequencies into consideration. |
| DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS | storeOffsetsWithPositions="true" | The posting list for each term will contain the term offsets in addition. e.g. game -> 0 : 1 : [2] : [6-10], ... | You want to use the Postings Highlighter, a fast version of highlighting that uses the posting list instead of the term vector. |
| omitNorms | omitNorms="true" | The norms data structure will not be built. | You don't need to boost short field contents, and you don't need index-time boosting per field. |
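Putting the table into practice, a sketch of schema.xml field definitions; the field names and the text_general field type are illustrative, not prescriptive:

```xml
<!-- full text search: default posting list plus offsets, for the postings highlighter -->
<field name="title" type="text_general" indexed="true" stored="true"
       storeOffsetsWithPositions="true"/>
<!-- identifier: no phrase queries, no term-frequency scoring, no length normalisation -->
<field name="id" type="string" indexed="true" stored="true"
       omitTermFreqAndPositions="true" omitNorms="true"/>
<!-- never searched, only returned: no inverted index at all -->
<field name="thumbnail_url" type="string" indexed="false" stored="true"/>
```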
Shameless plug for our training and services!
Did I mention we do Apache Solr Beginner and Elasticsearch Beginner training?
We also provide consulting on these topics; get in touch if you want to bring your search engine to the next level!
Subscribe to our newsletter
Did you like this post about the Lucene inverted index? Don't forget to subscribe to our newsletter to stay always up to date with the Information Retrieval world!
Author
Alessandro Benedetti
Alessandro Benedetti is the founder of Sease Ltd. A Senior Search Software Engineer, his focus is on R&D in information retrieval, information extraction, natural language processing, and machine learning.
Comments (8)
Prasanna kumar
November 22, 2018
Thanks for such a detailed article Alessandro 🙂
ashutosh
May 7, 2020
Thanks for such a great article. I have a query about live documents: why does Lucene retain deleted documents with alive status '0', with the actual deletion from memory happening only after a segment merge? What's the advantage here?
Alessandro Benedetti
May 7, 2020
Hi, one of the reasons Lucene is pretty fast at indexing (and searching, to a certain extent) is the fact that index segments are immutable.
Flipping a bit in the live documents data structure makes deletions visible in search really quickly.
So it is unbalanced in favour of visibility, in contrast to the disk space, which gets released only on segment merge.
Operating a full delete each time would imply a modification of the posting lists containing that document.
Definitely not a trivial task.
On segment merge the posting lists are merged anyway, so at that point it is much easier and quicker to claim back the space.
There may be many more reasons, but I hope this is a sufficient explanation!
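A minimal sketch of this behaviour, assuming a hypothetical index path and the standard Lucene API:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.FSDirectory;

import java.nio.file.Paths;

public class DeleteAndMerge {
  public static void main(String[] args) throws Exception {
    try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"));
         IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      // deleting only flips the bit in the live documents data structure
      writer.deleteDocuments(new Term("id", "b"));
      writer.commit();
      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        // maxDoc still counts the deleted document, numDocs does not
        System.out.println(reader.numDocs() + " live out of " + reader.maxDoc());
      }
      // merging rewrites the posting lists and claims the space back
      writer.forceMerge(1);
    }
  }
}
```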
ashutosh
May 8, 2020
Thanks for the explanation.
Shyamala
June 4, 2020
Hi, thanks for the article. I have a question: is there any REST API available to query the inverted index contents of a field? (Note: the field is of type 'keyword' and doc_values are disabled for it.) Or is there any way I can get the list of contents of a field (type 'keyword') without enabling doc_values?
Alessandro Benedetti
June 4, 2020
If you intend to retrieve the terms for a field, there are various ways in both Solr and Elasticsearch (REST search servers on top of Apache Lucene).
Lucene on its own is not a server, just a library.
In Apache Solr you can use the terms component: https://lucene.apache.org/solr/guide/8_5/the-terms-component.html
In Elasticsearch I would go with the terms aggregation: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html
or the term vectors endpoint: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-termvectors.html
Shyamala
June 5, 2020
Thanks for your quick reply Alessandro Benedetti.
My business problem is: "get all the distinct values/terms of a field (type: keyword)".
As you suggested, I can use the Elasticsearch terms aggregation only when the field has doc_values enabled. I got doc_values enabled and I'm running a terms aggregation to solve the same business problem, and I get the list of unique values for my field. But now the issue is that Elasticsearch goes through all the documents to fetch the list of unique values present in that index (confirmed by evaluating the response below), while I have only around 3000 unique values for that field:
"hits": {
  "total": 14041450,
  "max_score": 0.0,
  "hits": []
}
Is there a way that I can get the list of unique values of a field with hits limited to the number of unique values? (There will be a lot of documents being added on a daily basis and the terms of the field could grow rapidly; in that case I do not want Elasticsearch to go through all the documents just to get the list of unique values of the field.)
teddie
July 2, 2020
Thank you, truly a great post!