Text Segmentation (Interpunctio Verborum)
Learn about the text segmentation problem and its solution using backtracking.
Problem statement
Suppose we’re given a string of letters representing text in some foreign language but without any spaces or punctuation, and we want to break this string into its individual constituent words. For example, we might be given the following passage from Cicero’s famous oration in defense of Lucius Licinius Murena in 62 BCE, in the standard scriptio continua of classical Latin:
PRIMVSDIGNITASINTAMTENVISCIENTIANONPOTESTESSERESENIMSVNTPARVAEPROPEINSINGVLISLITTERISATQVEINTERPVNCTIONIBVSVERBORVMOCCVPATAE

Segmenting sequences of characters into words
A fluent Latin reader would parse this string (in modern orthography) as Primus dignitas in tam tenui scientia non potest esse; res enim sunt parvae, prope in singulis litteris atque interpunctionibus verborum occupatae. Text segmentation is not only a problem in classical Latin and Greek, but also in several modern languages and scripts, including Balinese, Burmese, Chinese, Japanese, Javanese, Khmer, Lao, Thai, Tibetan, and Vietnamese. Similar problems arise in segmenting unpunctuated English text into sentences, segmenting text into lines for typesetting, speech and handwriting recognition, curve simplification, and several types of time-series analysis. For purposes of illustration, we’ll stick to segmenting sequences of letters in the modern English alphabet into modern English words.
Of course, some strings can be segmented in several different ways; for example, BOTHEARTHANDSATURNSPIN can be decomposed into English words as either BOTH·EARTH·AND·SATURN·SPIN or BOT·HEART·HANDS·AT·URNS·PIN, among many other possibilities. For now, let’s consider an extremely simple segmentation problem: given a string of characters, can it be segmented into English words at all?
To make the problem concrete (and language-agnostic), let's assume we have access to a subroutine IsWord that takes a string w as input and returns True if w is a "word" and False if w is not a "word." For example, if we're trying to decompose the input string into palindromes, then "word" is a synonym for "palindrome," and therefore IsWord(ROTATOR) = True but IsWord(PALINDROME) = False.
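For instance, treating "word" as a synonym for "palindrome," the subroutine might look like the following Python sketch (the lowercase name is_word is illustrative; the algorithm only ever treats IsWord as a black box):

```python
def is_word(w):
    # Here a "word" means a nonempty palindrome: it reads the same forwards and backwards.
    return len(w) > 0 and w == w[::-1]

print(is_word("ROTATOR"))     # True
print(is_word("PALINDROME"))  # False
```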
Just like the previous problem, the input structure is a sequence, this time containing letters instead of numbers, so it's natural to consider a decision process that consumes the input characters in order from left to right. Similarly, the output structure is a sequence of words, so it's natural to consider a process that produces the output words in order from left to right. Thus, jumping into the middle of the segmentation process, we might imagine the following picture:

BLUE·STEM·UNIT·ROBOT │ HEARTHANDSATURNSPIN
Here, the black bar separates our past decisions—splitting the first 17 letters into four words—from the portion of the input string that we haven’t yet processed.
The next stage in our imagined process is to decide where the next word in the output sequence ends. For this specific example, there are four possibilities for the next output word: HE, HEAR, HEART, and HEARTH. We have no idea which of these choices, if any, is consistent with a complete segmentation of the input string. We could be "smart" at this point and try to figure out which choices are good, but that would require thinking! Instead, let's try every possibility by brute force and let the Recursion Fairy do all the real work.
- First, tentatively accept HE as the next word, and let the Recursion Fairy make the rest of the decisions.
- Then, tentatively accept HEAR as the next word, and let the Recursion Fairy make the rest of the decisions.
- Then, tentatively accept HEART as the next word, and let the Recursion Fairy make the rest of the decisions.
- Finally, tentatively accept HEARTH as the next word, and let the Recursion Fairy make the rest of the decisions.
As long as the Recursion Fairy reports success at least once, we report success. On the other hand, if the Recursion Fairy never reports success—in particular if the set of possible next words is empty—then we report failure.
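In code, this step amounts to enumerating every prefix of the unprocessed suffix that the word subroutine accepts; here is a small Python sketch, assuming the same illustrative is_word predicate:

```python
def candidate_next_words(suffix, is_word):
    # Every prefix of the unprocessed suffix that IsWord accepts is a possible next word.
    return [suffix[:i] for i in range(1, len(suffix) + 1) if is_word(suffix[:i])]

# With an English dictionary behind is_word, the suffix HEARTHANDSATURNSPIN yields
# exactly the four candidates HE, HEAR, HEART, and HEARTH.
```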
None of our past decisions affect which choices are available now; all that matters is the suffix of characters that we haven’t yet processed. In particular, several different sequences of past decisions could lead us to the same suffix, but they all leave us with exactly the same set of choices for that suffix.
Thus, we can simplify our picture of the recursive process by discarding everything left of the black bar:

│ HEARTHANDSATURNSPIN
Recursive algorithm
We’re now left with a simple and natural backtracking strategy: Select the first output word and recursively segment the rest of the input string.
To get a complete recursive algorithm, we need a base case. Our recursive strategy breaks down when we reach the end of the input string because there is no next word. Fortunately, the empty string has a unique segmentation into zero words. Putting all the pieces together, we arrive at the following simple recursive algorithm:
Algorithm
...
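The following is a minimal Python sketch of this backtracking strategy; the names splittable and is_word are illustrative, and the word subroutine is passed in as a parameter rather than fixed in advance:

```python
def splittable(s, is_word):
    # Base case: the empty string has a unique segmentation into zero words.
    if s == "":
        return True
    # Otherwise, try every possible first word and let recursion handle the rest.
    for i in range(1, len(s) + 1):
        if is_word(s[:i]) and splittable(s[i:], is_word):
            return True
    # No choice of first word leads to a complete segmentation.
    return False
```

For example, with an English word list loaded into a set words, splittable("BOTHEARTHANDSATURNSPIN", lambda w: w in words) reports True, since BOTH·EARTH·AND·SATURN·SPIN is one valid segmentation.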