Apache Lucene BlendedInfixSuggester : How It Works, Bugs And Improvements

The Apache Lucene/Solr suggesters are important to Sease: we explored the topic in the past [1] and we strongly believe that autocomplete is a vital feature for a lot of search applications.
This blog post explores in detail the current status of the Lucene BlendedInfixSuggester, some bugs in the most recent version (with the proposed solutions attached) and some possible improvements.

BlendedInfixSuggester

The BlendedInfixSuggester is an extension of the AnalyzingInfixSuggester with the additional functionality to weight prefix matches of your query across the matched documents.
It scores higher if a hit is closer to the start of the suggestion.
N.B. at the current stage only the first term in your query will affect the suggestion score

Let's see some of the configuration parameters, from the official wiki [2]:

  • blenderType: used to calculate the positional weight coefficient using the position of the first matching word. Can be one of:
    • position_linear: weightFieldValue*(1 - 0.10*position): matches closer to the start are given a higher score (default)
    • position_reciprocal: weightFieldValue/(1+position): matches closer to the start are given a score which decays faster than linear
    • position_exponential_reciprocal: weightFieldValue/pow(1+position,exponent): matches closer to the start are given a score which decays faster than reciprocal
      • exponent: an optional configuration variable for the position_exponential_reciprocal blenderType, used to control how fast the score decays with the position. Default 2.0.
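
To make the formulas concrete, here is a minimal Java sketch that simply mirrors the three coefficient formulas above (the class and method names are illustrative, not the actual Lucene internals):

public class BlenderCoefficients {

  // position = 0-based position of the first matching token in the suggestion
  static double positionLinear(int position) {
    return 1 - 0.10 * position;
  }

  static double positionReciprocal(int position) {
    return 1.0 / (1 + position);
  }

  static double positionExponentialReciprocal(int position, double exponent) {
    return 1.0 / Math.pow(1 + position, exponent); // default exponent is 2.0
  }

  public static void main(String[] args) {
    for (int position = 0; position <= 2; position++) {
      System.out.printf("position=%d linear=%.2f reciprocal=%.2f expReciprocal=%.2f%n",
          position,
          positionLinear(position),
          positionReciprocal(position),
          positionExponentialReciprocal(position, 2.0));
    }
  }
}

The coefficient is then multiplied by the weight extracted from the weightField (more on that, and on its pitfalls, later in the post).
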
Description
Data Structure: Auxiliary Lucene Index
Building: for each document, the stored content of the field is analyzed according to the suggestAnalyzerFieldType and then additionally EdgeNgram token filtered. Finally an auxiliary index is built with those tokens.
Lookup strategy: the query is analysed according to the suggestAnalyzerFieldType. Then a phrase search is triggered against the auxiliary Lucene index. The suggestions are identified starting at the beginning of each token in the field content.
Suggestions returned: the entire content of the field.

This suggester is really common nowadays as it allows providing suggestions in the middle of a field's content, taking advantage of the analysis chain configured for the field.

In this way it is possible to provide suggestions considering synonyms, stop words, stemming and any other token filter used in the analysis, and to match the suggestion on internal tokens.
Finally the suggestion is scored, based on the position of the match.
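
To put the pieces together, here is a minimal, self-contained sketch of the BlendedInfixSuggester used directly from Lucene (Lucene 7.x-era APIs; the paths, the constant weight and the field names are made up for the example):

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.suggest.DocumentDictionary;
import org.apache.lucene.search.suggest.Lookup.LookupResult;
import org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggester;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class BlendedInfixDemo {
  public static void main(String[] args) throws Exception {
    StandardAnalyzer analyzer = new StandardAnalyzer();

    // 1) Build a tiny index with a stored "title" field and a numeric "weight" field
    Directory index = FSDirectory.open(Paths.get("/tmp/demo-index"));
    IndexWriter writer = new IndexWriter(index, new IndexWriterConfig(analyzer));
    String[] titles = {
        "Video gaming: the history",
        "Nowadays Video games are a phenomenal economic business",
        "The new generation of PC and Console Video games",
        "Video games: multiplayer gaming"};
    for (String title : titles) {
      Document doc = new Document();
      doc.add(new TextField("title", title, Field.Store.YES));
      doc.add(new StoredField("weight", 1000L)); // constant weight, just for the example
      writer.addDocument(doc);
    }
    writer.close();

    // 2) Build the suggester from the index through a DocumentDictionary
    //    (the default blenderType is position_linear)
    Directory suggesterDir = FSDirectory.open(Paths.get("/tmp/demo-suggester"));
    BlendedInfixSuggester suggester = new BlendedInfixSuggester(suggesterDir, analyzer);
    DirectoryReader reader = DirectoryReader.open(index);
    suggester.build(new DocumentDictionary(reader, "title", "weight"));

    // 3) Lookup: "gaming" can match anywhere in the title, the score depends on
    //    how close the first match is to the start of the suggestion
    for (LookupResult result : suggester.lookup("gaming", 10, true, false)) {
      System.out.println(result.value + " | " + result.key);
    }

    reader.close();
    suggester.close();
  }
}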

The simple corpus of documents for the examples will be the following:

[
      {
        "id":"44",
        "title":"Video gaming: the history"},
      {
        "id":"11",
        "title":"Nowadays Video games are a phenomenal economic business"},
      {
        "id":"55",
        "title":"The new generation of PC and Console Video games"},
      {
        "id":"33",
        "title":"Video games: multiplayer gaming"}]

And a simple synonym mapping : multiplayer, online

Let's see some examples:

Query to autocomplete: "gaming"
Suggestions:
  • "Video gaming: the history"
  • "Video games: multiplayer gaming"
  • "Nowadays Video games are a phenomenal economic business"
Explanation: the input query is analysed, and the token produced is "game".

In the auxiliary index, for each field content we have the EdgeNgram tokens:

"v", "vi", "vid"…, "g", "ga", "gam", "game".

So the match happens and the suggestions are returned.
N.B. the first two suggestions are ranked higher as the matched term happens to be closer to the start of the suggestion.

Let’s explore the score of each Suggestion given various Blender Types :

Query: gaming

Video gaming: the history (first match position: 1)
  • position_linear: 1 - 0.1*1 = 0.9
  • position_reciprocal: 1/(1+1) = 0.5
  • position_exponential_reciprocal: 1/(1+1)^2 = 0.25

Video games: multiplayer gaming (first match position: 1)
  • position_linear: 1 - 0.1*1 = 0.9
  • position_reciprocal: 1/(1+1) = 0.5
  • position_exponential_reciprocal: 1/(1+1)^2 = 0.25

Nowadays Video games are a phenomenal economic business (first match position: 2)
  • position_linear: 1 - 0.1*2 = 0.8
  • position_reciprocal: 1/(1+2) ≈ 0.33
  • position_exponential_reciprocal: 1/(1+2)^2 ≈ 0.11

The final score of the suggestion will be :

long score = (long) (weight * coefficient)

N.B. I highlighted the data type because it directly affects the bugs we are going to discuss.

Suggestion Score Approximation

The optional weightField parameter is extremely important for the BlendedInfixSuggester.
It assigns the suggestion weight the value extracted from the configured field.
E.g. the suggestions may come from the product name field, while the suggestion weight depends on how profitable the suggested product is.

<str name="field">productName</str>
<str name="weightField">profit</str>

So far, so good, but unfortunately there are two problems with that.

Bug 1 – WeightField Not Defined -> Zero suggestion score

How to reproduce it: don't define any weightField in the suggester config.
Effect: the suggestion ranking is lost, all the suggestions have a score of 0, and the position of the match doesn't matter anymore.
The weightField is not a mandatory configuration for the BlendedInfixSuggester.
Your use case might not involve any weight for your suggestions: you may just be interested in the positional scoring (the main reason the BlendedInfixSuggester exists in the first place).
Unfortunately, this is not possible at the moment:
if the weightField is not defined, each suggestion will have a weight of 0.
This is because the weight associated with each document in the document dictionary is a long. If the field to extract the weight from is not defined (null), the weight returned will just be 0.
This doesn't allow differentiating between a weight that should be 0 (the value extracted from the field) and a weight that is simply absent (no value at all).
A solution has been proposed here[3].

Bug 2 – Bad Approximation Of Suggestion Score For Small Weights

There is a misleading data type cast in the score calculation for the suggestion:

long score = (long) (weight * coefficient)

This apparently innocent cast actually has very nasty effects if the weight associated with a suggestion is 1 or small enough.

Weight = 1, suggestion "Video gaming: the history" (first match position: 1)
  • position_linear: 0.9 * 1 = 0.9, cast to long = 0
  • position_reciprocal: 0.5 * 1 = 0.5, cast to long = 0
  • position_exponential_reciprocal: 0.25 * 1 = 0.25, cast to long = 0

Weight = 2, suggestion "Video gaming: the history" (first match position: 1)
  • position_linear: 0.9 * 2 = 1.8, cast to long = 1
  • position_reciprocal: 0.5 * 2 = 1.0, cast to long = 1
  • position_exponential_reciprocal: 0.25 * 2 = 0.5, cast to long = 0

Basically you risk losing the ranking of your suggestions, reducing the score to only a few possible values: 0 or 1 (in edge cases).
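
A quick check in plain Java makes the truncation obvious (pure arithmetic, no Lucene involved):

public class CastTruncationDemo {
  public static void main(String[] args) {
    double linear = 1 - 0.1 * 1;        // position_linear coefficient for position 1 -> 0.9
    double reciprocal = 1.0 / (1 + 1);  // position_reciprocal coefficient for position 1 -> 0.5

    // weight = 1: every suggestion collapses to score 0, the ranking is lost
    System.out.println((long) (linear * 1));     // 0
    System.out.println((long) (reciprocal * 1)); // 0

    // weight = 2: the scores collapse to 0 or 1, still mostly ties
    System.out.println((long) (linear * 2));     // 1
    System.out.println((long) (reciprocal * 2)); // 1

    // a large weight (e.g. 1000, as in the examples later in the post) hides the problem
    System.out.println((long) (linear * 1000));     // 900
    System.out.println((long) (reciprocal * 1000)); // 500
  }
}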

A solution has been proposed here[3]

Multi Term Matches Handling

It is quite common to have multiple terms in the autocomplete query, so your suggester should be able to manage multiple matches in the suggestion accordingly.

Given a simple corpus (composed of just the following suggestions) and the query:
“Mini Bar Frid” 

You currently see these suggestions (score | suggestion):

  • 1000 | Mini Bar something Fridge
  • 1000 | Mini Bar something else Fridge
  • 1000 | Mini Bar Fridge something
  • 1000 | Mini Bar Fridge something else
  • 1000 | Mini something Bar Fridge

This is because at the moment the first matching term wins it all (the positions of the other matches are ignored).
This brings a lot of possible ties (all scored 1000) that should be broken to give the user a nice and intuitive ranking.

Intuitively I would instead expect something like the following (note that allTermsRequired=true and the schema weight field always returns 1000):

  • Mini Bar Fridge something
  • Mini Bar Fridge something else
  • Mini Bar something Fridge
  • Mini Bar something else Fridge
  • Mini something Bar Fridge

Let’s see a proposed Solution [4] :

Positional Coefficient

Instead of taking into account just the first term position in the suggestion, it is possible to use all the positions of the matched terms ["mini", "bar", "fridge"].
Each position match will affect the score according to:

  • how far the matched term position is from the ideal position match
    • Query: Mini Bar Fri, ideal positions: [0,1,2]
    • Suggestion 1: Mini Bar something Fridge, matched positions: [0,1,3]
    • Suggestion 2: Mini Bar something else Fridge, matched positions: [0,1,4]
    • Suggestion 2 will be penalised as the "Fri" match happens farther (4 > 3) from the ideal position 2
  • how early the mis-positioning happens: the earlier it happens, the stronger the penalty to pay on the score
    • Query: Mini Bar Fri, ideal positions: [0,1,2]
    • Suggestion 1: Mini Bar something Fridge, matched positions: [0,1,3]
    • Suggestion 2: Mini something Bar Fridge, matched positions: [0,2,3]
    • Suggestion 2 will be additionally penalised as its first positional mismatch ("Bar") happens closer to the beginning of the suggestion

Considering only the discontinuous positions (the matches that break a consecutive run) proved useful:

Query 1: Bar some
Query 2: some
Suggestion: Mini Bar something Fridge
Query 1 matched term positions: [1,2]
Query 2 matched term positions: [2]

If we compare the suggestion score for these two queries, it would seem unfair to penalise the first one just because it matches two (consecutive) terms, while the second query has just one match (positioned worse than the first match of query 1). A sketch of such a positional coefficient follows.
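
The following Java sketch is just one possible way to express the idea: it is an illustration of the reasoning above, not the actual patch attached to [4]. It penalises each matched position by its distance from the ideal position, weights early mismatches more heavily, and skips positions that are consecutive to the previous match:

import java.util.Arrays;
import java.util.List;

public class PositionalCoefficient {

  // Illustrative coefficient: 1.0 for a perfect match on the ideal positions [0,1,2,...],
  // lower when matched terms sit farther away and when the mis-positioning happens early.
  static double positionalCoefficient(List<Integer> matchedPositions) {
    double penalty = 0;
    for (int i = 0; i < matchedPositions.size(); i++) {
      int matched = matchedPositions.get(i);
      int ideal = i; // with a perfect match the i-th query term sits at position i
      boolean consecutive = i > 0 && matched == matchedPositions.get(i - 1) + 1;
      if (consecutive) {
        continue; // only discontinuous positions contribute to the penalty
      }
      // displacement from the ideal position, discounted quadratically by how late it appears:
      // the same displacement early in the suggestion costs more than a late one
      penalty += (matched - ideal) / Math.pow(i + 1, 2);
    }
    return 1.0 / (1.0 + penalty);
  }

  public static void main(String[] args) {
    // Query: "Mini Bar Fri", ideal positions [0,1,2]; this reproduces the relative
    // ordering of the experimental results below, not the exact scores
    System.out.println(positionalCoefficient(Arrays.asList(0, 1, 2))); // Mini Bar Fridge something      ≈ 1.00
    System.out.println(positionalCoefficient(Arrays.asList(0, 1, 3))); // Mini Bar something Fridge      ≈ 0.90
    System.out.println(positionalCoefficient(Arrays.asList(0, 1, 4))); // Mini Bar something else Fridge ≈ 0.82
    System.out.println(positionalCoefficient(Arrays.asList(0, 2, 3))); // Mini something Bar Fridge      ≈ 0.80
    System.out.println(positionalCoefficient(Arrays.asList(1, 2, 3))); // something Mini Bar Fridge      ≈ 0.50
  }
}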

Introducing this advanced positional coefficient calculation helped in improving the overall behaviour in the experimental tests created.
The results obtained were quite promising:

Query : Mini Bar Fri
100 |Mini Bar Fridge something
100 |Mini Bar Fridge something else
100 |Mini Bar Fridge a a a a a a a a a a a a a a a a a a a a a a
26 |Mini Bar something Fridge
22 |Mini Bar something else Fridge
17 |Mini something Bar Fridge
8 |something Mini Bar Fridge
7 |something else Mini Bar Fridge

There is still a tie for the exact prefix matches, but let's see if we can finalise that improvement as well.

Token Count Coefficient

Let's focus on the top three ranked suggestions we just saw:

Query : Mini Bar Fri
100 |Mini Bar Fridge something
100 |Mini Bar Fridge something else
100 |Mini Bar Fridge a a a a a a a a a a a a a a a a a a a a a a

Intuitively we want this order, to break the ties.
The closer the number of matched terms is to the total number of terms in the suggestion, the better.
Ideally we want our top scoring suggestion to contain just the matched terms, if possible.
We also don't want to introduce strong inconsistencies for the other suggestions: ideally we should only affect the ties.
This is achievable by calculating an additional coefficient, dependent on the term counts:
Token Count Coefficient = matched terms count / total terms count

Then we can scale this value accordingly :
90% of the final score will derive from the positional coefficient
10% of the final score will derive from the token count coefficient

Query: Mini Bar Fri
90*1.0 + 10*(3/4) = 97.5 | Mini Bar Fridge something
90*1.0 + 10*(3/5) = 96.0 | Mini Bar Fridge something else
90*1.0 + 10*(3/25) = 91.2 | Mini Bar Fridge a a a a a a a a a a a a a a a a a a a a a a
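
The same numbers can be reproduced with a few lines of plain Java (the positional coefficient is fixed to 1.0 here, since all three suggestions match the query perfectly at the start):

public class TokenCountCoefficientDemo {

  // 90% positional coefficient + 10% token count coefficient, on a 0-100 scale
  static double blend(double positionalCoefficient, int matchedTerms, int totalTerms) {
    double tokenCountCoefficient = matchedTerms / (double) totalTerms;
    return 90 * positionalCoefficient + 10 * tokenCountCoefficient;
  }

  public static void main(String[] args) {
    // Query: "Mini Bar Fri" -> 3 matched terms
    System.out.println(blend(1.0, 3, 4));  // 97.5 -> "Mini Bar Fridge something"
    System.out.println(blend(1.0, 3, 5));  // 96.0 -> "Mini Bar Fridge something else"
    System.out.println(blend(1.0, 3, 25)); // 91.2 -> "Mini Bar Fridge a a a ..." (25 terms)
  }
}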

It will require some additional tuning, but the overall idea should bring a better ranking function to the BlendedInfixSuggester when multiple term matches are involved!
If you have any suggestion, feel free to leave a comment below!
The code is available in the GitHub pull request attached to the Lucene Jira issue [4].

[1] Solr Autocomplete
[2] Blended Infix Suggester Solr Wiki
[3] LUCENE-8343
[4] LUCENE-8347

Solr : " You complete me! " : The Apache Solr Suggester

This blog post is about the Apache Solr Autocomplete feature.
It is clear that the current documentation available on the wiki is not enough to fully understand the Solr Suggester: this blog post will describe all the available implementations, with examples, tips and tricks.

Introduction

If there's one thing that months on the Solr-user mailing list have taught me, it is that the autocomplete feature in a search engine is vital, and that around Apache Solr autocomplete there is as much hype as confusion.

In this blog post I am going to try to clarify as much as possible all the kinds of suggesters that can be used in Solr, exploring in detail how they work and showing some real-world examples.

It is not in the scope of this blog post to explore the configurations in detail.

Please use the official wiki [1] and this really interesting blog post [2] to complement this resource.

Let’s start with the definition of the Apache Solr Suggester component.

The Apache Solr Suggester

From the official Solr wiki [1]:
” The SuggestComponent in Solr provides users with automatic suggestions for query terms. You can use this to implement a powerful auto-suggest feature in your search application.
This approach utilizes Lucene’s Suggester implementation and supports all of the lookup implementations available in Lucene.
The main features of this Suggester are:
  • Lookup implementation pluggability
  • Term dictionary pluggability, giving you the flexibility to choose the dictionary implementation
  • Distributed support “

For the details of the configuration parameters I suggest the official wiki as a reference.

Our focus will be the practical use of the different lookup implementations, with clear examples.

Term Dictionary

The term dictionary defines the way the terms (the source for the suggestions) are retrieved for the Solr autocomplete.
There are different ways of retrieving the terms; we are going to focus on the DocumentDictionary (the most common and the simplest to use).
For details about the other dictionary implementations, please refer to the official documentation as usual.

The DocumentDictionary uses the Lucene index to provide the list of possible suggestions; specifically, a field is configured to be the source for these terms.

Suggester Building

Building a suggester is the process of:
  • retrieving the terms (the source for the suggestions) from the dictionary
  • building the data structures that the suggester requires for the lookup at query time
  • storing the data structures in memory/on disk

The produced data structure will be stored in memory in the first place.

It is suggested to additionally store the built data structures on disk: in this way they will be available without rebuilding when they are no longer in memory.

For example, when you start up Solr, the data will be loaded from disk into memory without any rebuilding being necessary.

This parameter is:

"storeDir" for the FuzzyLookup

"indexPath" for the AnalyzingInfixLookup

The built data structures will later be used by the suggester lookup strategy, at query time.
In detail, for the DocumentDictionary, during the building process, for ALL the documents in the index:
  • the stored content of the configured field is read from disk (stored="true" is required for the field for the suggester to work)
  • the compressed content is decompressed (remember that Solr stores the plain content of a field applying a compression algorithm [3])
  • the suggester data structure is built
We must be really careful with this sentence:
"for ALL the documents" -> no delta dictionary building is happening.
So take extra care every time you decide to build the suggester!
Two suggester configuration parameters are strictly related to this observation:

buildOnCommit or buildOnOptimize: if true, the lookup data structure will be rebuilt after each soft commit. If false (the default), the lookup data will be built only when requested by the query parameter suggest.build=true.

Because of the previous observation, it is quite easy to understand why buildOnCommit is highly discouraged.

buildOnStartup: if true, the lookup data structure will be built when Solr starts or when the core is reloaded. If this parameter is not specified, the suggester will check if the lookup data structure is present on disk and build it if not found.

Again, it is highly discouraged to set this to true, or our Solr cores could take a really long time to start up.
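
As a concrete illustration of the recommended workflow (build the suggester explicitly, for example from a scheduled job, and then just query it), here is a minimal SolrJ sketch; the endpoint, the collection and the suggester name are assumptions for the example:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SuggesterBuildAndQuery {
  public static void main(String[] args) throws Exception {
    // Assumed endpoint and collection, adjust to your installation
    HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build();

    // 1) Explicit, controlled build (instead of buildOnCommit/buildOnStartup)
    SolrQuery build = new SolrQuery();
    build.setRequestHandler("/suggest");
    build.set("suggest", "true");
    build.set("suggest.dictionary", "AnalyzingSuggester"); // the suggester "name" from the config
    build.set("suggest.build", "true");
    client.query(build);

    // 2) Query the built suggester
    SolrQuery lookup = new SolrQuery();
    lookup.setRequestHandler("/suggest");
    lookup.set("suggest", "true");
    lookup.set("suggest.dictionary", "AnalyzingSuggester");
    lookup.set("suggest.q", "video gam");
    QueryResponse response = client.query(lookup);
    System.out.println(response.getResponse()); // inspect the "suggest" section of the raw response

    client.close();
  }
}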

A good consideration at this point would be to introduce a delta approach to the dictionary building.

It could be a good improvement, making more sense out of the buildOnCommit feature.

I will follow up verifying the technical feasibility of this solution.

Now let's step through the description of the various lookup implementations with related examples.
Note: when using the field type "text_en" we refer to a simple English analyser with soft stemming and a stop filter enabled.

The simple corpus of documents for the examples will be the following:

[
      {
        "id":"44",
        "title":"Video gaming: the history"},
      {
        "id":"11",
        "title":"Video games are an economic business"},
      {
        "id":"55",
        "title":"The new generation of PC and Console Video games"},
      {
        "id":"33",
        "title":"Video games: multiplayer gaming"}]

And a simple synonym mapping : multiplayer, online

AnalyzingLookupFactory


<lst name="suggester">

  <str name=”name”>AnalyzingSuggester</str>

  <str name=”lookupImpl”>AnalyzingLookupFactory</str>

  <str name=”dictionaryImpl”>DocumentDictionaryFactory</str>

  <str name=”field”>title</str>

  <str name=”weightField”>price</str>

  <str name=”suggestAnalyzerFieldType”>text_en</str>

</lst>




 

Description
Data Structure: FST
Building: for each document, the stored content of the field is analyzed according to the suggestAnalyzerFieldType. The tokens produced are added to the index FST.
Lookup strategy: the query is analysed and the tokens produced are added to the query FST. An intersection happens between the index FST and the query FST. The suggestions are identified starting at the beginning of the field content.
Suggestions returned: the entire content of the field.

This suggester is quite powerful as it allows providing suggestions starting at the beginning of a field's content, taking advantage of the analysis chain configured for the field.

In this way it is possible to provide suggestions considering synonyms, stop words, stemming and any other token filter used in the analysis. Let's see some examples:

Query to autocomplete: "Video gam"
Suggestions:
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
Explanation: the suggestions are simply the result of the prefix match. No surprises so far.

Query to autocomplete: "Video Games"
Suggestions:
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
Explanation: the input query is analysed, and the tokens produced are the following: "video" "game".

The analysis was applied at building time as well, producing the same stemmed terms for the beginning of the titles:

"video gaming" -> "video" "game"

"video games" -> "video" "game"

So the prefix match applies.

Query to autocomplete: "Video game econ"
Suggestions:
  • "Video games are an economic business"
Explanation: in this case we can see that the stop words were not considered when building the index FST.
Note: position increments MUST NOT be preserved for this example to work, see the configuration details.

Query to autocomplete: "Video games online ga"
Suggestions:
  • "Video games: multiplayer gaming"
Explanation: synonym expansion has happened and the match is returned, as "online" and "multiplayer" are considered synonyms by the suggester, based on the analysis applied.

FuzzyLookupFactory


<lst name="suggester">

  <str name=”name”>FuzzySuggester</str>

  <str name=”lookupImpl”>FuzzyLookupFactory</str> 

  <str name=”dictionaryImpl”>DocumentDictionaryFactory</str>

  <str name=”field”>title</str>

  <str name=”weightField”>price</str>

  <str name=”suggestAnalyzerFieldType”>text_en</str>

</lst>

Description
Data Structure: FST
Building: for each document, the stored content of the field is analyzed according to the suggestAnalyzerFieldType. The tokens produced are added to the index FST.
Lookup strategy: the query is analysed and the tokens produced are then expanded, producing for each token all the variations according to the maximum edit distance configured for the string distance function in use (the default is the Levenshtein distance [4]). The tokens finally produced are added to the query FST, keeping the variations. An intersection happens between the index FST and the query FST. The suggestions are identified starting at the beginning of the field content.
Suggestions returned: the entire content of the field.

This suggester is quite powerful as it allows providing suggestions starting at the beginning of a field's content, taking advantage of a fuzzy search on top of the analysis chain configured for the field.

In this way it is possible to provide suggestions considering synonyms, stop words, stemming and any other token filter used in the analysis, and also to support terms misspelled by the user.

It is an extension of the Analyzing lookup.
IMPORTANT: remember the proper order of processing happening at query time:

  • FIRST, the query is analysed and the tokens are produced
  • THEN, the tokens are expanded with the inflections, based on the edit distance and the distance algorithm configured
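
To see why this order matters, you can check what the analysis step produces before any fuzzy expansion takes place. A small sketch, using Lucene's EnglishAnalyzer as a stand-in for the "text_en" field type:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalyzeBeforeExpansion {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new EnglishAnalyzer(); // stand-in for the "text_en" field type
    String[] queries = {"Video gmaes", "Video gamign", "Video gaming"};
    for (String query : queries) {
      System.out.print(query + " -> ");
      try (TokenStream stream = analyzer.tokenStream("title", query)) {
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
          System.out.print(term + " ");
        }
        stream.end();
      }
      System.out.println();
    }
    analyzer.close();
    // With this analyzer:
    // Video gmaes  -> video gmae    (the fuzzy expansion of "gmae" then reaches "game")
    // Video gamign -> video gamign  (the expansion reaches "gaming", which does not prefix-match the indexed "game")
    // Video gaming -> video game    (what actually ends up in the index FST)
  }
}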

Let's see some examples:

Query to autocomplete: "Video gmaes"
Suggestions:
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
Explanation: the input query is analysed, and the tokens produced are the following: "video" "gmae".

The associated FST is then expanded with new states containing the inflections of each token.

For example "game" will be added to the query FST because it has a distance of 1 from the original token.

So the prefix matching works fine, returning the expected suggestions.

Query to autocomplete: "Video gmaing"
Suggestions:
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
Explanation: the input query is analysed, and the tokens produced are the following: "video" "gma".

The associated FST is then expanded with new states containing the inflections of each token.

For example "gam" will be added to the query FST because it has a distance of 1 from the original token.

So the prefix match applies.

Query to autocomplete: "Video gamign"
Suggestions:
  • No suggestion returned
Explanation: this can seem odd at first, but it is coherent with the lookup implementation.

The input query is analysed, and the tokens produced are the following: "video" "gamign".

The associated FST is then expanded with new states containing the inflections of each token.

For example "gaming" will be added to the query FST because it has a distance of 1 from the original token.

But no prefix matching will apply, because in the index FST we have "game", the stemmed token for "gaming".

AnalyzingInfixLookupFactory


<lst name="suggester">

  <str name=”name”>AnalyzingInfixSuggester</str>

  <str name=”lookupImpl”>AnalyzingInfixLookupFactory</str> 

  <str name=”dictionaryImpl”>DocumentDictionaryFactory</str>

  <str name=”field”>title</str>

  <str name=”weightField”>price</str>

  <str name=”suggestAnalyzerFieldType”>text_en</str>

</lst>



 

Description
Data Structure: Auxiliary Lucene Index
Building: for each document, the stored content of the field is analyzed according to the suggestAnalyzerFieldType and then additionally EdgeNgram token filtered. Finally an auxiliary index is built with those tokens.
Lookup strategy: the query is analysed according to the suggestAnalyzerFieldType. Then a phrase search is triggered against the auxiliary Lucene index. The suggestions are identified starting at the beginning of each token in the field content.
Suggestions returned: the entire content of the field.

This suggester is really common nowadays as it allows providing suggestions in the middle of a field's content, taking advantage of the analysis chain configured for the field.

In this way it is possible to provide suggestions considering synonyms, stop words, stemming and any other token filter used in the analysis, and to match the suggestion on internal tokens. Let's see some examples:

Query to autocomplete: "gaming"
Suggestions:
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
Explanation: the input query is analysed, and the token produced is "game".

In the auxiliary index, for each field content we have the EdgeNgram tokens:

"v", "vi", "vid"…, "g", "ga", "gam", "game".

So the match happens and the suggestions are returned.

Query to autocomplete: "ga"
Suggestions:
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
Explanation: the input query is analysed, and the token produced is "ga".

In the auxiliary index, for each field content we have the EdgeNgram tokens:

"v", "vi", "vid"…, "g", "ga", "gam", "game".

So the match happens and the suggestions are returned.

Query to autocomplete: "game econ"
Suggestions:
  • "Video games are an economic business"
Explanation: stop words do not appear in the auxiliary index, while both "game" and "econ" do, so the match applies.

BlendedInfixLookupFactory

We are not going to describe the details of this lookup strategy, as it is pretty much the same as the AnalyzingInfix one.

The only difference appears when scoring the suggestions, weighting the prefix matches across the matched documents: the score will be higher if a hit is closer to the start of the suggestion.

FSTLookupFactory


<lst name="suggester">

  <str name=”name”>FSTSuggester</str>

  <str name=”lookupImpl”>FSTLookupFactory</str> 

  <str name=”dictionaryImpl”>DocumentDictionaryFactory</str>

  <str name=”field”>title</str>

</lst>



Description
Data Structure: FST
Building: for each document, the stored content is added to the index FST.
Lookup strategy: the query is added to the query FST. An intersection happens between the index FST and the query FST. The suggestions are identified starting at the beginning of the field content.
Suggestions returned: the entire content of the field.

This suggester is quite simple as it allows providing suggestions at the beginning of a field's content, with an exact prefix match. Let's see some examples:

Query to autocomplete: "Video gam"
Suggestions:
  • "Video gaming: the history"
  • "Video games are an economic business"
  • "Video games: multiplayer gaming"
Explanation: the suggestions are simply the result of the exact prefix match. No surprises so far.

Query to autocomplete: "Video Games"
Suggestions:
  • No suggestions
Explanation: the input query is not analysed, and no field content in the documents starts with that exact prefix.

Query to autocomplete: "video gam"
Suggestions:
  • No suggestions
Explanation: the input query is not analysed, and no field content in the documents starts with that exact prefix.

Query to autocomplete: "game"
Suggestions:
  • No suggestions
Explanation: this lookup strategy works only at the beginning of the field content, so no suggestion is returned.

For the following lookup strategy we are going to use a slightly modified corpus of documents :

[
      {
        "id":"44",
        "title":"Video games: the history"},
      {
        "id":"11",
        "title":"Video games the historical background"},
      {
        "id":"55",
        "title":"Superman, hero of the modern time"},
      {
        "id":"33",
        "title":"the study of the hierarchical faceting"}]

FreeTextLookupFactory

<lst name="suggester">
  <str name="name">FreeTextSuggester</str>
  <str name="lookupImpl">FreeTextLookupFactory</str>
  <str name="dictionaryImpl">DocumentDictionaryFactory</str>
  <str name="field">title</str>
  <str name="ngrams">3</str>
  <str name="separator"> </str>
  <str name="suggestFreeTextAnalyzerFieldType">text_general</str>
</lst>



Description
Data Structure: FST
Building: for each document, the stored content of the field is analyzed according to the suggestFreeTextAnalyzerFieldType. A ShingleFilter is appended as the last token filter, with minShingle=2 and maxShingle equal to the configured ngrams (3 in our example). The final tokens produced are added to the index FST.
Lookup strategy: the query is analysed according to the suggestFreeTextAnalyzerFieldType, with the same ShingleFilter appended as the last token filter. Only the last "ngrams" tokens of the query will be evaluated to produce the suggestions.
Suggestions returned: ngram token suggestions (not the entire content of the field).

This lookup strategy is completely different from the others seen so far: its main difference is that the suggestions are ngram tokens (and NOT the full content of the field).

We must take extra care in using this suggester, as it is quite easily prone to errors. Some guidelines:

  • Don't use a heavy analyzer: the suggested terms will come from the index, so be sure they are meaningful tokens. A really basic analyser is suggested; stop words and stemming are not recommended.
  • Be sure you use the proper separator (' ' is suggested); the default will be encoded as "#30;".
  • The ngrams parameter sets the last n tokens of the query to be considered for the lookup.
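
To see what the shingling step actually produces, you can reproduce it by hand. A sketch using a WhitespaceAnalyzer plus a ShingleFilter with sizes 2 to 3 (mirroring ngrams=3; the real suggester wires the ShingleFilter around the configured suggestFreeTextAnalyzerFieldType internally):

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.analysis.shingle.ShingleFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ShingleDemo {
  public static void main(String[] args) throws Exception {
    WhitespaceAnalyzer analyzer = new WhitespaceAnalyzer();
    TokenStream tokens = analyzer.tokenStream("title", "video games the history");
    // shingles of size 2 to 3; unigrams are emitted as well by default
    try (TokenStream shingles = new ShingleFilter(tokens, 2, 3)) {
      CharTermAttribute term = shingles.addAttribute(CharTermAttribute.class);
      shingles.reset();
      while (shingles.incrementToken()) {
        System.out.println(term); // "video", "video games", "video games the", "games", "games the", ...
      }
      shingles.end();
    }
    analyzer.close();
  }
}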

Let's see some examples:

Query to autocomplete: "video g"
Suggestions:
  • "video gaming"
  • "video games"
  • "generation"
Explanation: the input query is analysed, and the tokens produced are the following: "video g", "g".

The analysis was applied at building time as well, producing 2-3 shingles.

"video g" matches 2 shingles from the index FST by prefix.

"g" matches 1 shingle from the index FST by prefix.

Query to autocomplete: "games the h"
Suggestions:
  • "games the history"
  • "games the historical"
  • "the hierarchical"
  • "hero"
Explanation: the input query is analysed, and the tokens produced are the following: "games the h", "the h", "h".

The analysis was applied at building time as well, producing 2-3 shingles.

"games the h" matches 2 shingles from the index FST by prefix.

"the h" matches 1 shingle from the index FST by prefix.

"h" matches 1 shingle from the index FST by prefix.

[1] Suggester Solr wiki
[2] Solr suggester
[3] Lucene Storing Compression
[4] Levenshtein Distance