DeepImpact solves this issue by relying on a pairwise cross-entropy loss, similar to the fine-tuning approach presented here. Instead of learning independent term-level scores that ignore term co-occurrences in the document, as DeepCT does, or relying on unchanged BM25 scoring, as DocT5Query does, DeepImpact directly optimizes the sum of query term impacts to maximize the score difference between relevant and non-relevant passages for a query.
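To make this concrete, here is a minimal sketch of that training objective: the relevance score is the sum of the document's learned impacts for the query terms, and the loss is a cross-entropy over a (relevant, non-relevant) passage pair. The dictionary representations and example weights are illustrative assumptions, not DeepImpact's actual model outputs.

```python
import math

def score(query_terms, doc_impacts):
    # Relevance score: sum of the document's learned impacts
    # for the terms that appear in the query.
    return sum(doc_impacts.get(t, 0.0) for t in query_terms)

def pairwise_ce_loss(query_terms, pos_impacts, neg_impacts):
    # Pairwise cross-entropy: softmax over the (positive, negative)
    # score pair, with the relevant passage as the target class.
    s_pos = score(query_terms, pos_impacts)
    s_neg = score(query_terms, neg_impacts)
    m = max(s_pos, s_neg)  # subtract the max for numerical stability
    log_z = m + math.log(math.exp(s_pos - m) + math.exp(s_neg - m))
    return log_z - s_pos  # -log softmax(s_pos)

# Hypothetical impacts: the relevant passage scores higher on the
# query terms, so the loss is small; swapping the pair increases it.
query = ["sparse", "retrieval"]
pos = {"sparse": 2.1, "retrieval": 1.7, "index": 0.4}
neg = {"sparse": 0.3, "dense": 1.9}
loss = pairwise_ce_loss(query, pos, neg)
```

Minimizing this loss pushes the summed impacts of relevant passages above those of non-relevant ones, which is exactly the quantity used at retrieval time.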
A few months after DeepImpact was released, other approaches were proposed, such as uniCOIL [4] and SPLADEv2 [5]. These methods, as we will see, are more effective than DeepImpact, but their model architectures are also more complex. Both compute the relevance score not only from document term impacts but also from learned query term weights. Furthermore, SPLADEv2 performs query expansion in addition to document expansion and trains the model with knowledge distillation.
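The scoring difference can be sketched as follows: instead of summing raw document impacts over query terms, uniCOIL-style models weight each query term as well. The dictionary representation and the example weights below are illustrative assumptions, not the actual model outputs.

```python
def lti_score(query_weights, doc_impacts):
    # uniCOIL-style scoring: each matching term contributes the
    # product of its learned query weight and its learned document
    # impact. With all query weights fixed to 1.0, this reduces to
    # DeepImpact's plain sum of document impacts.
    return sum(w * doc_impacts.get(t, 0.0)
               for t, w in query_weights.items())

# Hypothetical learned weights for the query "sparse retrieval".
query_weights = {"sparse": 1.5, "retrieval": 0.8}
doc_impacts = {"sparse": 2.0, "index": 1.0}
relevance = lti_score(query_weights, doc_impacts)  # only "sparse" matches
```

The extra degree of freedom lets the model down-weight uninformative query terms, at the cost of running a model over the query at search time.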
We can group DeepImpact, uniCOIL, and SPLADEv2 into the same category, which we call Learned Term Impact (LTI) frameworks.