This is another post about how to implement a basic Search Engine.
Previously, I explained:
- how to produce an inverted index in C# to support queries;
- how to improve the tokenization process by implementing the Porter Stemming Algorithm;
- how to improve the querying process to support boolean queries.
In this post, I will share how to implement Phrase Queries and how to generate Positional Indexes using C#. The source code is available on my GitHub.
Phrase Queries?
From the “Introduction to Information Retrieval” book:
Many complex or technical concepts and many organization or product names are multiword compounds or phrases. We would like to be able to pose a query such as "Stanford University" by treating it as a phrase so that a sentence in a document like "The Inventor Stanford Ovshinsky never went to university" is not a match.
To perform this kind of query in my search engine, I implemented a new Query concept: Distance.
```csharp
[Fact]
public void DistanceQueriesRespectsMaxDistance()
{
    var documents = new[]
    {
        "The Inventor Stanford Ovshinsky never went to university",
        "The Stanford University is in the heart of Northern California's dynamic Silicon Valley"
    };
    var index = new StringIndexer().CreateIndex(documents);
    var searcher = new Searcher(index);

    var query = Query.Distance(
        DefaultAnalyzer.Instance.AnalyzeOnlyTheFirstToken("Stanford"),
        DefaultAnalyzer.Instance.AnalyzeOnlyTheFirstToken("University"),
        maxDistance: 1
    );

    var results = searcher.Search(query);

    Assert.Equal(new[] { 1 }, results);
}
```
The idea is to match only documents where two terms appear close to each other in the text.
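In other words, once both terms are found in the same document, there must be at least one pair of recorded positions whose gap is within the maximum distance. A minimal sketch of that core check (the `WithinDistance` helper is my own illustration, not part of the project's code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DistanceSketch
{
    // Returns true if any position of the first term is within
    // maxDistance of any position of the second term, where both
    // position lists belong to the same document.
    public static bool WithinDistance(
        IList<long> positions1, IList<long> positions2, int maxDistance) =>
        positions1.Any(p1 =>
            positions2.Any(p2 => Math.Abs(p1 - p2) <= maxDistance));
}
```

With word positions, adjacent terms like "Stanford University" sit one position apart, so `maxDistance: 1` matches them.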
Positional Indexes?
To be able to support the Distance concept, I improved the Inverted Index data structure.
It is no longer sufficient to store simply lists of documents that contain individual terms (as we did in the previous posts). Now, for each term in the vocabulary, we will need to store, for each document, the positions of the term in the text.
```csharp
var documents = new[]
{
    //  1   2    3   4    5    6     7    8    9    10    11   12
    "One item is just one item. Two items are other two items"
};
```
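To illustrate what gets stored, here is a rough sketch of collecting word positions for a single document into a term-to-positions map. It assumes 1-based word positions, as in the comment above, and skips the stemming/analysis step the real indexer performs; the `PositionalSketch` helper is mine, not part of the project:

```csharp
using System;
using System.Collections.Generic;

public static class PositionalSketch
{
    // Builds a term -> 1-based word positions map for one document.
    // A real indexer would also run each token through the analyzer
    // (lowercasing, stemming, etc.) before recording its position.
    public static Dictionary<string, List<long>> Positions(string text)
    {
        var result = new Dictionary<string, List<long>>();
        var tokens = text.ToLowerInvariant()
            .Split(new[] { ' ', '.' }, StringSplitOptions.RemoveEmptyEntries);
        for (var i = 0; i < tokens.Length; i++)
        {
            if (!result.TryGetValue(tokens[i], out var positions))
                result[tokens[i]] = positions = new List<long>();
            positions.Add(i + 1); // 1-based, matching the comment above
        }
        return result;
    }
}
```

Without stemming, the document above yields `[2, 6]` for "item" and `[8, 12]` for "items"; with the Porter stemmer from the earlier post, both would collapse into a single "item" entry.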
We now have a Posting class holding the DocumentId and the Positions of the term in that document.
```csharp
using System.Collections.Generic;
using System.Linq;

namespace Tupi.Indexing
{
    public class InvertedIndex
    {
        private readonly ISet<int> _indexedDocuments = new SortedSet<int>();
        public IEnumerable<int> IndexedDocuments => _indexedDocuments;

        private readonly IDictionary<string, List<Posting>> _data =
            new Dictionary<string, List<Posting>>();

        internal void Append(string term, int documentId, long position)
        {
            if (_data.ContainsKey(term))
            {
                var posting = _data[term]
                    .FirstOrDefault(p => p.DocumentId == documentId);
                if (posting == null)
                {
                    posting = new Posting(documentId);
                    _data[term].Add(posting);
                }
                posting.Positions.Add(position);
            }
            else
            {
                var posting = new Posting(documentId);
                posting.Positions.Add(position);
                var postings = new List<Posting> { posting };
                _data.Add(term, postings);
            }
            _indexedDocuments.Add(documentId);
        }

        public IEnumerable<Posting> GetPostingsFor(string term) =>
            !_data.ContainsKey(term)
                ? Enumerable.Empty<Posting>()
                : _data[term];
    }

    public class Posting
    {
        public int DocumentId { get; }
        public IList<long> Positions { get; } = new List<long>();

        public Posting(int documentId)
        {
            DocumentId = documentId;
        }

        public static implicit operator int(Posting entry) =>
            entry.DocumentId;
    }
}
```
An inverted index that stores positional information needs considerably more memory, but there is no way around it if we want to support Phrase Queries.
The Distance Query Implementation
To process a phrase query, we need to:
- access the inverted index entries for each distinct term in the query;
- check whether both terms occur in the same document;
- check whether their positions in that document are compatible with the specified max distance.
Here is my implementation:
```csharp
public class DistanceQuery : Query
{
    private readonly string _term1;
    private readonly string _term2;
    private readonly int _maxDistance;

    public DistanceQuery(string term1, string term2, int maxDistance)
    {
        _term1 = term1;
        _term2 = term2;
        _maxDistance = maxDistance;
    }

    public override IEnumerable<int> Execute(InvertedIndex index)
    {
        var postingsForTerm1 = index.GetPostingsFor(_term1).ToList();
        var postingsForTerm2 = index.GetPostingsFor(_term2).ToList();

        // Walk the shorter postings list as the outer sequence.
        List<Posting> lessFrequent, moreFrequent;
        if (postingsForTerm1.Count < postingsForTerm2.Count)
        {
            lessFrequent = postingsForTerm1;
            moreFrequent = postingsForTerm2;
        }
        else
        {
            lessFrequent = postingsForTerm2;
            moreFrequent = postingsForTerm1;
        }

        using (var p1 = lessFrequent.GetEnumerator())
        using (var p2 = moreFrequent.GetEnumerator())
        {
            var hasElementsP1 = p1.MoveNext();
            var hasElementsP2 = p2.MoveNext();
            while (hasElementsP1 && hasElementsP2)
            {
                if (p1.Current.DocumentId == p2.Current.DocumentId)
                {
                    var pp1 = p1.Current.Positions;
                    var pp2 = p2.Current.Positions;
                    // Stop at the first matching pair so each document
                    // is yielded at most once.
                    var matched = false;
                    for (var i = 0; i < pp1.Count && !matched; i++)
                    {
                        for (var j = 0; j < pp2.Count; j++)
                        {
                            if (Math.Abs(pp1[i] - pp2[j]) <= _maxDistance)
                            {
                                yield return p1.Current.DocumentId;
                                matched = true;
                                break;
                            }
                            if (pp2[j] > pp1[i])
                            {
                                break;
                            }
                        }
                    }
                    hasElementsP1 = p1.MoveNext();
                    hasElementsP2 = p2.MoveNext();
                }
                else if (p1.Current.DocumentId < p2.Current.DocumentId)
                {
                    hasElementsP1 = p1.MoveNext();
                }
                else
                {
                    hasElementsP2 = p2.MoveNext();
                }
            }
        }
    }
}
```
Until now, we were relying on .NET's Enumerable.Intersect to combine postings lists. The implementation above, instead, is based on the positional intersection algorithm proposed in the book.
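For comparison, the document-level intersection from the earlier posts boils down to the classic two-pointer merge over sorted postings lists. A sketch, assuming both lists are sorted by document id (the `IntersectSketch` name is mine):

```csharp
using System.Collections.Generic;

public static class IntersectSketch
{
    // Merge-style intersection of two sorted document-id lists,
    // as described in "Introduction to Information Retrieval".
    // Runs in O(n + m) instead of scanning one list per element.
    public static IEnumerable<int> Intersect(IList<int> a, IList<int> b)
    {
        int i = 0, j = 0;
        while (i < a.Count && j < b.Count)
        {
            if (a[i] == b[j]) { yield return a[i]; i++; j++; }
            else if (a[i] < b[j]) i++;
            else j++;
        }
    }
}
```

The positional variant above follows the same merge skeleton, only descending into the position lists when the document ids line up.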
Let’s see the next steps…