Suffix automaton

Deterministic finite automaton accepting the set of all suffixes of a particular string

Type: Substring index
Invented: 1983
Invented by: Anselm Blumer; Janet Blumer; Andrzej Ehrenfeucht; David Haussler; Ross McConnell
Space complexity: O ( n ) {\displaystyle O(n)} (average and worst case)

In computer science, a suffix automaton is an efficient data structure for representing the substring index of a given string which allows the storage, processing, and retrieval of compressed information about all its substrings. The suffix automaton of a string S {\displaystyle S} is the smallest directed acyclic graph with a dedicated initial vertex and a set of "final" vertices, such that paths from the initial vertex to final vertices represent the suffixes of the string.

In terms of automata theory, a suffix automaton is the minimal partial deterministic finite automaton that recognizes the set of suffixes of a given string S = s 1 s 2 … s n {\displaystyle S=s_{1}s_{2}\dots s_{n}} . The state graph of a suffix automaton is called a directed acyclic word graph (DAWG), a term that is also sometimes used for any deterministic acyclic finite state automaton.

Suffix automata were introduced in 1983 by a group of scientists from the University of Denver and the University of Colorado Boulder. They suggested a linear-time online algorithm for its construction and showed that the suffix automaton of a string S {\displaystyle S} having length at least two characters has at most 2 | S | − 1 {\textstyle 2|S|-1} states and at most 3 | S | − 4 {\textstyle 3|S|-4} transitions. Further works have shown a close connection between suffix automata and suffix trees, and have outlined several generalizations of suffix automata, such as the compacted suffix automaton obtained by compression of nodes with a single outgoing arc.

Suffix automata provide efficient solutions to problems such as substring search and computation of the longest common substring of two or more strings.

History

Anselm Blumer with a drawing of generalized CDAWG for strings ababc and abcab

The concept of suffix automaton was introduced in 1983[1] by a group of scientists from University of Denver and University of Colorado Boulder consisting of Anselm Blumer, Janet Blumer, Andrzej Ehrenfeucht, David Haussler and Ross McConnell, although similar concepts had earlier been studied alongside suffix trees in the works of Peter Weiner,[2] Vaughan Pratt[3] and Anatol Slissenko.[4] In their initial work, Blumer et al. showed that a suffix automaton built for the string S {\displaystyle S} of length greater than 1 {\displaystyle 1} has at most 2 | S | − 1 {\displaystyle 2|S|-1} states and at most 3 | S | − 4 {\displaystyle 3|S|-4} transitions, and suggested a linear algorithm for automaton construction.[5]

In 1983, Mu-Tian Chen and Joel Seiferas independently showed that Weiner's 1973 suffix-tree construction algorithm[2] while building a suffix tree of the string S {\displaystyle S} constructs a suffix automaton of the reversed string S R {\textstyle S^{R}} as an auxiliary structure.[6] In 1987, Blumer et al. applied the compressing technique used in suffix trees to a suffix automaton and invented the compacted suffix automaton, which is also called the compacted directed acyclic word graph (CDAWG).[7] In 1997, Maxime Crochemore and Renaud Vérin developed a linear algorithm for direct CDAWG construction.[1] In 2001, Shunsuke Inenaga et al. developed an algorithm for construction of CDAWG for a set of words given by a trie.[8]

Definitions

Usually when speaking about suffix automata and related concepts, some notions from formal language theory and automata theory are used, in particular:[9]

  • "Alphabet" is a finite set Σ {\displaystyle \Sigma } that is used to construct words. Its elements are called "characters";
  • "Word" is a finite sequence of characters ω = ω 1 ω 2 ω n {\displaystyle \omega =\omega _{1}\omega _{2}\dots \omega _{n}} . "Length" of the word ω {\displaystyle \omega } is denoted as | ω | = n {\displaystyle |\omega |=n} ;
  • "Formal language" is a set of words over given alphabet;
  • "Language of all words" is denoted as Σ {\displaystyle \Sigma ^{*}} (where the "*" character stands for Kleene star), "empty word" (the word of zero length) is denoted by the character ε {\displaystyle \varepsilon } ;
  • "Concatenation of words" α = α 1 α 2 α n {\displaystyle \alpha =\alpha _{1}\alpha _{2}\dots \alpha _{n}} and β = β 1 β 2 β m {\displaystyle \beta =\beta _{1}\beta _{2}\dots \beta _{m}} is denoted as α β {\displaystyle \alpha \cdot \beta } or α β {\displaystyle \alpha \beta } and corresponds to the word obtained by writing β {\displaystyle \beta } to the right of α {\displaystyle \alpha } , that is, α β = α 1 α 2 α n β 1 β 2 β m {\displaystyle \alpha \beta =\alpha _{1}\alpha _{2}\dots \alpha _{n}\beta _{1}\beta _{2}\dots \beta _{m}} ;
  • "Concatenation of languages" A {\displaystyle A} and B {\displaystyle B} is denoted as A B {\displaystyle A\cdot B} or A B {\displaystyle AB} and corresponds to the set of pairwise concatenations A B = { α β : α A , β B } {\displaystyle AB=\{\alpha \beta :\alpha \in A,\beta \in B\}} ;
  • If the word ω Σ {\displaystyle \omega \in \Sigma ^{*}} may be represented as ω = α γ β {\displaystyle \omega =\alpha \gamma \beta } , where α , β , γ Σ {\displaystyle \alpha ,\beta ,\gamma \in \Sigma ^{*}} , then words α {\displaystyle \alpha } , β {\displaystyle \beta } and γ {\displaystyle \gamma } are called "prefix", "suffix" and "subword" (substring) of the word ω {\displaystyle \omega } correspondingly;
  • If T = T 1 T n {\displaystyle T=T_{1}\dots T_{n}} and T l T l + 1 T r = S {\displaystyle T_{l}T_{l+1}\dots T_{r}=S} (with 1 l r n {\displaystyle 1\leq l\leq r\leq n} ) then S {\displaystyle S} is said to "occur" in T {\displaystyle T} as a subword. Here l {\displaystyle l} and r {\displaystyle r} are called left and right positions of occurrence of S {\displaystyle S} in T {\displaystyle T} correspondingly.

Automaton structure

Formally, a deterministic finite automaton is defined by the 5-tuple A = ( Σ , Q , q 0 , F , δ ) {\displaystyle {\mathcal {A}}=(\Sigma ,Q,q_{0},F,\delta )} , where:[10]

  • Σ {\displaystyle \Sigma } is an "alphabet" that is used to construct words,
  • Q {\displaystyle Q} is a set of automaton "states",
  • q 0 ∈ Q {\displaystyle q_{0}\in Q} is an "initial" state of the automaton,
  • F ⊂ Q {\displaystyle F\subset Q} is a set of "final" states of the automaton,
  • δ : Q × Σ → Q {\displaystyle \delta :Q\times \Sigma \mapsto Q} is a partial "transition" function of the automaton, such that δ ( q , σ ) {\displaystyle \delta (q,\sigma )} for q ∈ Q {\displaystyle q\in Q} and σ ∈ Σ {\displaystyle \sigma \in \Sigma } is either undefined or defines a transition from q {\displaystyle q} over the character σ {\displaystyle \sigma } .

Most commonly, a deterministic finite automaton is represented as a directed graph ("diagram") such that:[10]

  • The set of graph vertices corresponds to the set of states Q {\displaystyle Q} ,
  • The graph has a specific marked vertex corresponding to the initial state q 0 {\displaystyle q_{0}} ,
  • The graph has several marked vertices corresponding to the set of final states F {\displaystyle F} ,
  • The set of graph arcs corresponds to the set of transitions δ {\displaystyle \delta } ,
  • Specifically, every transition δ ( q 1 , σ ) = q 2 {\textstyle \delta (q_{1},\sigma )=q_{2}} is represented by an arc from q 1 {\displaystyle q_{1}} to q 2 {\displaystyle q_{2}} marked with the character σ {\displaystyle \sigma } . This transition may also be denoted as q 1 σ q 2 {\textstyle q_{1}{\begin{smallmatrix}{\sigma }\\[-5pt]{\longrightarrow }\end{smallmatrix}}q_{2}} .

In terms of its diagram, the automaton recognizes the word ω = ω 1 ω 2 … ω m {\displaystyle \omega =\omega _{1}\omega _{2}\dots \omega _{m}} only if there is a path from the initial vertex q 0 {\displaystyle q_{0}} to some final vertex q ∈ F {\displaystyle q\in F} such that the concatenation of characters on this path forms ω {\displaystyle \omega } . The set of words recognized by an automaton forms a language that is said to be recognized by the automaton. In these terms, the language recognized by the suffix automaton of S {\displaystyle S} is the language of its (possibly empty) suffixes.[9]
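Recognition by a partial deterministic finite automaton can be sketched in a few lines of Python. The sketch below is illustrative: the transition table is stored as a plain dictionary, and the example automaton is a hand-built minimal DFA for the suffixes of the word "ab" (states, as in the article's terminology, correspond to right-context classes; the initial state 0 is final because the empty word is a suffix).

```python
def recognizes(delta, q0, finals, word):
    """Run a partial DFA on word; accept iff the walk ends in a final state."""
    q = q0
    for ch in word:
        if (q, ch) not in delta:   # undefined transition: the word is rejected
            return False
        q = delta[(q, ch)]
    return q in finals

# Hand-built minimal DFA for the suffixes of "ab":
# state 0 is initial, state 1 recognizes "a", state 2 recognizes "ab" and "b".
delta = {(0, "a"): 1, (0, "b"): 2, (1, "b"): 2}
finals = {0, 2}   # the empty word and "ab"/"b" are suffixes of "ab"
```

With this automaton, "ab", "b" and the empty word are accepted, while "a" reaches the non-final state 1 and is rejected.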

Automaton states

"Right context" of the word ω {\displaystyle \omega } with respect to language L {\displaystyle L} is a set [ ω ] R = { α : ω α L } {\displaystyle [\omega ]_{R}=\{\alpha :\omega \alpha \in L\}} that is a set of words α {\displaystyle \alpha } such that their concatenation with ω {\displaystyle \omega } forms a word from L {\displaystyle L} . Right contexts induce a natural equivalence relation [ α ] R = [ β ] R {\displaystyle [\alpha ]_{R}=[\beta ]_{R}} on the set of all words. If language L {\displaystyle L} is recognized by some deterministic finite automaton, there exists unique up to isomorphism automaton that recognizes the same language and has the minimum possible number of states. Such an automaton is called a minimal automaton for the given language L {\displaystyle L} . Myhill–Nerode theorem allows it to define it explicitly in terms of right contexts:[11][12]

Theorem — Minimal automaton recognizing language L {\displaystyle L} over the alphabet Σ {\displaystyle \Sigma } may be explicitly defined in the following way:

  • The alphabet Σ {\displaystyle \Sigma } stays the same,
  • States Q {\displaystyle Q} correspond to right contexts [ ω ] R {\displaystyle [\omega ]_{R}} of all possible words ω ∈ Σ* {\displaystyle \omega \in \Sigma ^{*}} ,
  • The initial state q 0 {\displaystyle q_{0}} corresponds to the right context of the empty word [ ε ] R {\displaystyle [\varepsilon ]_{R}} ,
  • Final states F {\displaystyle F} correspond to right contexts [ ω ] R {\displaystyle [\omega ]_{R}} of words ω ∈ L {\displaystyle \omega \in L} ,
  • Transitions δ {\displaystyle \delta } are given by [ ω ] R σ [ ω σ ] R {\displaystyle [\omega ]_{R}{\begin{smallmatrix}{\sigma }\\[-5pt]{\longrightarrow }\end{smallmatrix}}[\omega \sigma ]_{R}} , where ω ∈ Σ* {\displaystyle \omega \in \Sigma ^{*}} and σ ∈ Σ {\displaystyle \sigma \in \Sigma } .

In these terms, a "suffix automaton" is the minimal deterministic finite automaton recognizing the language of suffixes of the word S = s 1 s 2 s n {\displaystyle S=s_{1}s_{2}\dots s_{n}} . The right context of the word ω {\displaystyle \omega } with respect to this language consists of words α {\displaystyle \alpha } , such that ω α {\displaystyle \omega \alpha } is a suffix of S {\displaystyle S} . It allows to formulate the following lemma defining a bijection between the right context of the word and the set of right positions of its occurrences in S {\displaystyle S} :[13][14]

Theorem — Let e n d p o s ( ω ) = { r : ω = s l … s r } {\displaystyle endpos(\omega )=\{r:\omega =s_{l}\dots s_{r}\}} be the set of right positions of occurrences of ω {\displaystyle \omega } in S {\displaystyle S} .

There is the following bijection between e n d p o s ( ω ) {\displaystyle endpos(\omega )} and [ ω ] R {\displaystyle [\omega ]_{R}} :

  • If x ∈ e n d p o s ( ω ) {\displaystyle x\in endpos(\omega )} , then s x + 1 s x + 2 … s n ∈ [ ω ] R {\displaystyle s_{x+1}s_{x+2}\dots s_{n}\in [\omega ]_{R}} ;
  • If α ∈ [ ω ] R {\displaystyle \alpha \in [\omega ]_{R}} , then n − | α | ∈ e n d p o s ( ω ) {\displaystyle n-\vert \alpha \vert \in endpos(\omega )} .

For example, for the word S = a b a c a b a {\displaystyle S=abacaba} and its subword ω = a b {\displaystyle \omega =ab} , it holds that e n d p o s ( a b ) = { 2 , 6 } {\displaystyle endpos(ab)=\{2,6\}} and [ a b ] R = { a , a c a b a } {\displaystyle [ab]_{R}=\{a,acaba\}} . Informally, [ a b ] R {\displaystyle [ab]_{R}} is formed by words that follow occurrences of a b {\displaystyle ab} up to the end of S {\displaystyle S} and e n d p o s ( a b ) {\displaystyle endpos(ab)} is formed by the right positions of those occurrences. In this example, the element x = 2 ∈ e n d p o s ( a b ) {\displaystyle x=2\in endpos(ab)} corresponds with the word s 3 s 4 s 5 s 6 s 7 = a c a b a ∈ [ a b ] R {\displaystyle s_{3}s_{4}s_{5}s_{6}s_{7}=acaba\in [ab]_{R}} while the word a ∈ [ a b ] R {\displaystyle a\in [ab]_{R}} corresponds with the element 7 − | a | = 6 ∈ e n d p o s ( a b ) {\displaystyle 7-|a|=6\in endpos(ab)} .
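The bijection can be checked directly with a short brute-force sketch (the function names endpos and right_context are illustrative, not from the article). Since every occurrence of ω ending at 1-indexed position r is completed to a suffix of S by exactly the tail s[r:], the right context is obtained by slicing at each right position:

```python
def endpos(s, w):
    """1-indexed right positions of all occurrences of w in s."""
    return {i + len(w) for i in range(len(s) - len(w) + 1)
            if s[i:i + len(w)] == w}

def right_context(s, w):
    """Words a such that w+a is a suffix of s: the tails after each occurrence."""
    return {s[r:] for r in endpos(s, w)}
```

For S = "abacaba" this reproduces the example above: endpos("ab") = {2, 6} and [ab]_R = {"acaba", "a"}.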

This implies several structural properties of suffix automaton states. Let | α | ≤ | β | {\displaystyle |\alpha |\leq |\beta |} ; then:[14]

  • If [ α ] R {\displaystyle [\alpha ]_{R}} and [ β ] R {\displaystyle [\beta ]_{R}} have at least one common element x {\displaystyle x} , then e n d p o s ( α ) {\displaystyle endpos(\alpha )} and e n d p o s ( β ) {\displaystyle endpos(\beta )} have a common element as well. This implies α {\displaystyle \alpha } is a suffix of β {\displaystyle \beta } and therefore e n d p o s ( β ) ⊂ e n d p o s ( α ) {\displaystyle endpos(\beta )\subset endpos(\alpha )} and [ β ] R ⊂ [ α ] R {\displaystyle [\beta ]_{R}\subset [\alpha ]_{R}} . In the aforementioned example, a ∈ [ a b ] R ∩ [ c a b ] R {\displaystyle a\in [ab]_{R}\cap [cab]_{R}} , so a b {\displaystyle ab} is a suffix of c a b {\displaystyle cab} and thus [ c a b ] R = { a } ⊂ { a , a c a b a } = [ a b ] R {\displaystyle [cab]_{R}=\{a\}\subset \{a,acaba\}=[ab]_{R}} and e n d p o s ( c a b ) = { 6 } ⊂ { 2 , 6 } = e n d p o s ( a b ) {\displaystyle endpos(cab)=\{6\}\subset \{2,6\}=endpos(ab)} ;
  • If [ α ] R = [ β ] R {\displaystyle [\alpha ]_{R}=[\beta ]_{R}} , then e n d p o s ( α ) = e n d p o s ( β ) {\displaystyle endpos(\alpha )=endpos(\beta )} , thus α {\displaystyle \alpha } occurs in S {\displaystyle S} only as a suffix of β {\displaystyle \beta } . For example, for α = b {\displaystyle \alpha =b} and β = a b {\displaystyle \beta =ab} it holds that [ b ] R = [ a b ] R = { a , a c a b a } {\displaystyle [b]_{R}=[ab]_{R}=\{a,acaba\}} and e n d p o s ( b ) = e n d p o s ( a b ) = { 2 , 6 } {\displaystyle endpos(b)=endpos(ab)=\{2,6\}} ;
  • If [ α ] R = [ β ] R {\displaystyle [\alpha ]_{R}=[\beta ]_{R}} and γ {\displaystyle \gamma } is a suffix of β {\displaystyle \beta } such that | α | | γ | | β | {\displaystyle |\alpha |\leq |\gamma |\leq |\beta |} , then [ α ] R = [ γ ] R = [ β ] R {\displaystyle [\alpha ]_{R}=[\gamma ]_{R}=[\beta ]_{R}} . In the example above [ c ] R = [ b a c ] R = { a b a } {\displaystyle [c]_{R}=[bac]_{R}=\{aba\}} and it holds for "intermediate" suffix γ = a c {\displaystyle \gamma =ac} that [ a c ] R = { a b a } {\displaystyle [ac]_{R}=\{aba\}} .

Any state q = [ α ] R {\displaystyle q=[\alpha ]_{R}} of the suffix automaton recognizes some continuous chain of nested suffixes of the longest word recognized by this state.[14]

"Left extension" γ {\displaystyle {\overset {\scriptstyle {\leftarrow }}{\gamma }}} of the string γ {\displaystyle \gamma } is the longest string ω {\displaystyle \omega } that has the same right context as γ {\displaystyle \gamma } . Length | γ | {\displaystyle |{\overset {\scriptstyle {\leftarrow }}{\gamma }}|} of the longest string recognized by q = [ γ ] R {\displaystyle q=[\gamma ]_{R}} is denoted by l e n ( q ) {\displaystyle len(q)} . It holds:[15]

Theorem — Left extension of γ {\displaystyle \gamma } may be represented as γ = β γ {\displaystyle {\overleftarrow {\gamma }}=\beta \gamma } , where β {\displaystyle \beta } is the longest word such that any occurrence of γ {\displaystyle \gamma } in S {\displaystyle S} is preceded by β {\displaystyle \beta } .

"Suffix link" l i n k ( q ) {\displaystyle link(q)} of the state q = [ α ] R {\displaystyle q=[\alpha ]_{R}} is the pointer to the state p {\displaystyle p} that contains the largest suffix of α {\displaystyle \alpha } that is not recognized by q {\displaystyle q} .

In these terms, it can be said that q = [ α ] R {\displaystyle q=[\alpha ]_{R}} recognizes exactly all suffixes of α {\displaystyle {\overset {\scriptstyle {\leftarrow }}{\alpha }}} that are longer than l e n ( l i n k ( q ) ) {\displaystyle len(link(q))} and not longer than l e n ( q ) {\displaystyle len(q)} . It also holds:[15]

Theorem — Suffix links form a tree T ( V , E ) {\displaystyle {\mathcal {T}}(V,E)} that may be defined explicitly in the following way:

  1. Vertices V {\displaystyle V} of the tree correspond to left extensions ω {\displaystyle {\overleftarrow {\omega }}} of all S {\displaystyle S} substrings,
  2. Edges E {\displaystyle E} of the tree connect pairs of vertices ( ω , α ω ) {\displaystyle ({\overleftarrow {\omega }},{\overleftarrow {\alpha \omega }})} , such that α Σ {\displaystyle \alpha \in \Sigma } and ω α ω {\displaystyle {\overleftarrow {\omega }}\neq {\overleftarrow {\alpha \omega }}} .

Connection with suffix trees

Relationship of the suffix trie, suffix tree, DAWG and CDAWG

A "prefix tree" (or "trie") is a rooted directed tree in which arcs are marked by characters in such a way no vertex v {\displaystyle v} of such tree has two out-going arcs marked with the same character. Some vertices in trie are marked as final. Trie is said to recognize a set of words defined by paths from its root to final vertices. In this way prefix trees are a special kind of deterministic finite automata if you perceive its root as an initial vertex.[16] The "suffix trie" of the word S {\displaystyle S} is a prefix tree recognizing a set of its suffixes. "A suffix tree" is a tree obtained from a suffix trie via the compaction procedure, during which consequent edges are merged if the degree of the vertex between them is equal to two.[15]

By its definition, a suffix automaton can be obtained via minimization of the suffix trie. It may be shown that a compacted suffix automaton is obtained by both minimization of the suffix tree (if one assumes each string on the edge of the suffix tree is a solid character from the alphabet) and compaction of the suffix automaton.[17] Besides this connection between the suffix tree and the suffix automaton of the same string there is as well a connection between the suffix automaton of the string S = s 1 s 2 s n {\displaystyle S=s_{1}s_{2}\dots s_{n}} and the suffix tree of the reversed string S R = s n s n 1 s 1 {\displaystyle S^{R}=s_{n}s_{n-1}\dots s_{1}} .[18]

Similarly to right contexts one may introduce "left contexts" [ ω ] L = { β ∈ Σ* : β ω ∈ L } {\displaystyle [\omega ]_{L}=\{\beta \in \Sigma ^{*}:\beta \omega \in L\}} , "right extensions" ω   {\displaystyle {\overset {\scriptstyle {\rightarrow }}{\omega ~}}} corresponding to the longest string having the same left context as ω {\displaystyle \omega } , and the equivalence relation [ α ] L = [ β ] L {\displaystyle [\alpha ]_{L}=[\beta ]_{L}} . If one considers right extensions with respect to the language L {\displaystyle L} of "prefixes" of the string S {\displaystyle S} , one obtains:[15]

Theorem — Suffix tree of the string S {\displaystyle S} may be defined explicitly in the following way:

  • Vertices V {\displaystyle V} of the tree correspond to right extensions ω {\displaystyle {\overrightarrow {\omega }}} of all S {\displaystyle S} substrings,
  • Edges E {\displaystyle E} correspond to triplets ( ω , x α , ω x ) {\displaystyle ({\overrightarrow {\omega }},x\alpha ,{\overrightarrow {\omega x}})} such that x Σ {\displaystyle x\in \Sigma } and ω x = ω x α {\displaystyle {\overrightarrow {\omega x}}={\overrightarrow {\omega }}x\alpha } .

Here the triplet ( v 1 , ω , v 2 ) ∈ E {\displaystyle (v_{1},\omega ,v_{2})\in E} means there is an edge from v 1 {\displaystyle v_{1}} to v 2 {\displaystyle v_{2}} with the string ω {\displaystyle \omega } written on it, which implies the suffix link tree of the string S {\displaystyle S} and the suffix tree of the string S R {\displaystyle S^{R}} are isomorphic:[18]

Suffix structures of words "abbcbc" and "cbcbba"
  • Suffix automaton of the word "abbcbc"
  • Suffix trie, suffix tree and CDAWG of the word "abbcbc"
  • Suffix tree of the word "cbcbba" (suffix link tree of the word "abbcbc")

Similarly to the case of left extensions, the following lemma holds for right extensions:[15]

Theorem — Right extension of the string γ {\displaystyle \gamma } may be represented as γ = γ α {\displaystyle {\overrightarrow {\gamma }}=\gamma \alpha } , where α {\displaystyle \alpha } is the longest word such that every occurrence of γ {\displaystyle \gamma } in S {\displaystyle S} is succeeded by α {\displaystyle \alpha } .

Size

A suffix automaton of the string S {\displaystyle S} of length n > 1 {\displaystyle n>1} has at most 2 n − 1 {\displaystyle 2n-1} states and at most 3 n − 4 {\displaystyle 3n-4} transitions. These bounds are reached on the strings a b b … b b = a b n − 1 {\displaystyle abb\dots bb=ab^{n-1}} and a b b … b c = a b n − 2 c {\displaystyle abb\dots bc=ab^{n-2}c} correspondingly.[13] This may be formulated in a stricter way as | δ | ≤ | Q | + n − 2 {\displaystyle |\delta |\leq |Q|+n-2} where | δ | {\displaystyle |\delta |} and | Q | {\displaystyle |Q|} are the numbers of transitions and states in the automaton correspondingly.[14]
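These bounds can be checked on small strings by grouping all substrings into Myhill–Nerode classes by their right contexts, as in the Definitions section. The brute-force sketch below (my own illustrative code, quadratic in time and space, usable only for short strings) counts the states of the minimal automaton as distinct right contexts, and its transitions as pairs (class, character) for which some one-character extension is still a substring:

```python
def automaton_size(s):
    """States and transitions of the minimal suffix automaton, by brute force."""
    def rc(w):  # right context of w with respect to the suffixes of s
        return frozenset(s[i + len(w):] for i in range(len(s) - len(w) + 1)
                         if s[i:i + len(w)] == w)

    subs = {s[i:j] for i in range(len(s) + 1) for j in range(i, len(s) + 1)}
    classes = {}                       # right context -> one representative word
    for w in subs:
        classes.setdefault(rc(w), w)
    # a transition by c exists for a class iff rep+c occurs in s (class-invariant)
    transitions = sum(1 for w in classes.values() for c in set(s) if rc(w + c))
    return len(classes), transitions

n = 7
states, _ = automaton_size("a" + "b" * (n - 1))        # ab^(n-1): most states
_, trans = automaton_size("a" + "b" * (n - 2) + "c")   # ab^(n-2)c: most transitions
```

For n = 7 this yields exactly 2n − 1 = 13 states on ab^(n−1) and 3n − 4 = 17 transitions on ab^(n−2)c, matching the extremal examples above.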

Maximal suffix automata
  • Suffix automaton of a b n − 1 {\displaystyle ab^{n-1}}
  • Suffix automaton of a b n − 2 c {\displaystyle ab^{n-2}c}

Construction

Initially the automaton only consists of a single state corresponding to the empty word, then characters of the string are added one by one and the automaton is rebuilt on each step incrementally.[19]

State updates

After a new character is appended to the string, some equivalence classes are altered. Let [ α ] R ω {\displaystyle [\alpha ]_{R_{\omega }}} be the right context of α {\displaystyle \alpha } with respect to the language of suffixes of ω {\displaystyle \omega } . Then the transition from [ α ] R ω {\displaystyle [\alpha ]_{R_{\omega }}} to [ α ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}} after x {\displaystyle x} is appended to ω {\displaystyle \omega } is given by the following lemma:[14]

Theorem — Let α , ω ∈ Σ* {\displaystyle \alpha ,\omega \in \Sigma ^{*}} be some words over Σ {\displaystyle \Sigma } and x ∈ Σ {\displaystyle x\in \Sigma } be some character from this alphabet. Then the following correspondence holds between [ α ] R ω {\displaystyle [\alpha ]_{R_{\omega }}} and [ α ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}} :

  • [ α ] R ω x = [ α ] R ω x { ε } {\displaystyle [\alpha ]_{R_{\omega x}}=[\alpha ]_{R_{\omega }}x\cup \{\varepsilon \}} if α {\displaystyle \alpha } is a suffix of ω x {\displaystyle \omega x} ;
  • [ α ] R ω x = [ α ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}=[\alpha ]_{R_{\omega }}x} otherwise.

After adding x {\displaystyle x} to the current word ω {\displaystyle \omega } , the right context of α {\displaystyle \alpha } may change significantly only if α {\displaystyle \alpha } is a suffix of ω x {\displaystyle \omega x} . This implies the equivalence relation ≡ R ω x {\displaystyle \equiv _{R_{\omega x}}} is a refinement of ≡ R ω {\displaystyle \equiv _{R_{\omega }}} . In other words, if [ α ] R ω x = [ β ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}=[\beta ]_{R_{\omega x}}} , then [ α ] R ω = [ β ] R ω {\displaystyle [\alpha ]_{R_{\omega }}=[\beta ]_{R_{\omega }}} . After the addition of a new character, at most two equivalence classes of ≡ R ω {\displaystyle \equiv _{R_{\omega }}} will be split, and each of them may split into at most two new classes. First, the equivalence class corresponding to the empty right context is always split into two equivalence classes, one of them corresponding to ω x {\displaystyle \omega x} itself and having { ε } {\displaystyle \{\varepsilon \}} as a right context. This new equivalence class contains exactly ω x {\displaystyle \omega x} and all its suffixes that did not occur in ω {\displaystyle \omega } , as the right context of such words was empty before and contains only the empty word now.[14]

Given the correspondence between states of the suffix automaton and vertices of the suffix tree, it is possible to identify the second state that may split after a new character is appended. The transition from ω {\displaystyle \omega } to ω x {\displaystyle \omega x} corresponds to the transition from ω R {\displaystyle \omega ^{R}} to x ω R {\displaystyle x\omega ^{R}} in the reversed string. In terms of suffix trees it corresponds to the insertion of the new longest suffix x ω R {\displaystyle x\omega ^{R}} into the suffix tree of ω R {\displaystyle \omega ^{R}} . At most two new vertices may be formed after this insertion: one of them corresponding to x ω R {\displaystyle x\omega ^{R}} , while the other one corresponds to its direct ancestor if there was a branching. Returning to suffix automata, it means the first new state recognizes ω x {\displaystyle \omega x} and the second one (if there is a second new state) is its suffix link. It may be stated as the following lemma:[14]

Theorem — Let ω Σ {\displaystyle \omega \in \Sigma ^{*}} , x Σ {\displaystyle x\in \Sigma } be some word and character over Σ {\displaystyle \Sigma } . Also let α {\displaystyle \alpha } be the longest suffix of ω x {\displaystyle \omega x} , which occurs in ω {\displaystyle \omega } , and let β = α {\displaystyle \beta ={\overset {\scriptstyle {\leftarrow }}{\alpha }}} . Then for any substrings u , v {\displaystyle u,v} of ω {\displaystyle \omega } it holds:

  • If [ u ] R ω = [ v ] R ω {\displaystyle [u]_{R_{\omega }}=[v]_{R_{\omega }}} and [ u ] R ω [ α ] R ω {\displaystyle [u]_{R_{\omega }}\neq [\alpha ]_{R_{\omega }}} , then [ u ] R ω x = [ v ] R ω x {\displaystyle [u]_{R_{\omega x}}=[v]_{R_{\omega x}}} ;
  • If [ u ] R ω = [ α ] R ω {\displaystyle [u]_{R_{\omega }}=[\alpha ]_{R_{\omega }}} and | u | | α | {\displaystyle \vert u\vert \leq \vert \alpha \vert } , then [ u ] R ω x = [ α ] R ω x {\displaystyle [u]_{R_{\omega x}}=[\alpha ]_{R_{\omega x}}} ;
  • If [ u ] R ω = [ α ] R ω {\displaystyle [u]_{R_{\omega }}=[\alpha ]_{R_{\omega }}} and | u | > | α | {\displaystyle \vert u\vert >\vert \alpha \vert } , then [ u ] R ω x = [ β ] R ω x {\displaystyle [u]_{R_{\omega x}}=[\beta ]_{R_{\omega x}}} .

It implies that if α = β {\displaystyle \alpha =\beta } (for example, when x {\displaystyle x} didn't occur in ω {\displaystyle \omega } at all and α = β = ε {\displaystyle \alpha =\beta =\varepsilon } ), then only the equivalence class corresponding to the empty right context is split.[14]

Besides suffix links, it is also necessary to define the final states of the automaton. It follows from the structure properties that all suffixes of a word α {\displaystyle \alpha } recognized by q = [ α ] R {\displaystyle q=[\alpha ]_{R}} are recognized by some vertex on the suffix path ( q , l i n k ( q ) , l i n k 2 ( q ) , … ) {\displaystyle (q,link(q),link^{2}(q),\dots )} of q {\displaystyle q} . Namely, suffixes with length greater than l e n ( l i n k ( q ) ) {\displaystyle len(link(q))} lie in q {\displaystyle q} , suffixes with length greater than l e n ( l i n k ( l i n k ( q ) ) ) {\displaystyle len(link(link(q)))} but not greater than l e n ( l i n k ( q ) ) {\displaystyle len(link(q))} lie in l i n k ( q ) {\displaystyle link(q)} , and so on. Thus, if the state recognizing ω {\displaystyle \omega } is denoted by l a s t {\displaystyle last} , then all final states (that is, those recognizing suffixes of ω {\displaystyle \omega } ) form the sequence ( l a s t , l i n k ( l a s t ) , l i n k 2 ( l a s t ) , … ) {\displaystyle (last,link(last),link^{2}(last),\dots )} .[19]

Transitions and suffix links updates

After the character x {\displaystyle x} is appended to ω {\displaystyle \omega } , the possible new states of the suffix automaton are [ ω x ] R ω x {\displaystyle [\omega x]_{R_{\omega x}}} and [ α ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}} . The suffix link from [ ω x ] R ω x {\displaystyle [\omega x]_{R_{\omega x}}} goes to [ α ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}} and from [ α ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}} it goes to l i n k ( [ α ] R ω ) {\displaystyle link([\alpha ]_{R_{\omega }})} . Words from [ ω x ] R ω x {\displaystyle [\omega x]_{R_{\omega x}}} occur in ω x {\displaystyle \omega x} only as its suffixes, therefore there should be no transitions at all from [ ω x ] R ω x {\displaystyle [\omega x]_{R_{\omega x}}} , while transitions to it should go from suffixes of ω {\displaystyle \omega } having length at least | α | {\displaystyle |\alpha |} and be marked with the character x {\displaystyle x} . The state [ α ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}} is formed by a subset of [ α ] R ω {\displaystyle [\alpha ]_{R_{\omega }}} , thus transitions from [ α ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}} should be the same as from [ α ] R ω {\displaystyle [\alpha ]_{R_{\omega }}} . Meanwhile, transitions leading to [ α ] R ω x {\displaystyle [\alpha ]_{R_{\omega x}}} should go from suffixes of ω {\displaystyle \omega } having length less than | α | {\displaystyle |\alpha |} and at least l e n ( l i n k ( [ α ] R ω ) ) {\displaystyle len(link([\alpha ]_{R_{\omega }}))} , as such transitions led to [ α ] R ω {\displaystyle [\alpha ]_{R_{\omega }}} before and corresponded to the part of this state that was split off. The states corresponding to these suffixes may be determined via a traversal of the suffix link path of [ ω ] R ω {\displaystyle [\omega ]_{R_{\omega }}} .[19]

Construction of the suffix automaton for the word abbcbc 
∅ → a
After the first character is appended, only one state is created in the suffix automaton. Similarly, only one leaf is added to the suffix tree.
a → ab
New transitions are drawn from all previous final states, as b did not appear before. For the same reason, another leaf is added to the root of the suffix tree.
ab → abb
The state 2 recognizes the words ab and b, but only b is a new suffix, therefore this word is separated into the state 4. In the suffix tree this corresponds to the split of the edge leading to the vertex 2.
abb → abbc
The character c occurs for the first time, so transitions are drawn from all previous final states. The suffix tree of the reverse string has another leaf added to the root.
abbc → abbcb
The state 4 consists of the single word b, which is a suffix, thus the state is not split. Correspondingly, a new leaf is attached to the vertex 4 in the suffix tree.
abbcb → abbcbc
The state 5 recognizes the words abbc, bbc, bc and c, but only the last two are suffixes of the new word, so they are separated into the new state 8. Correspondingly, the edge leading to the vertex 5 is split and the vertex 8 is put in the middle of the edge.

Construction algorithm

Theoretical results above lead to the following algorithm that takes character x {\displaystyle x} and rebuilds the suffix automaton of ω {\displaystyle \omega } into the suffix automaton of ω x {\displaystyle \omega x} :[19]

  1. The state corresponding to the word ω {\displaystyle \omega } is kept as l a s t {\displaystyle last} ;
  2. After x {\displaystyle x} is appended, previous value of l a s t {\displaystyle last} is stored in the variable p {\displaystyle p} and l a s t {\displaystyle last} itself is reassigned to the new state corresponding to ω x {\displaystyle \omega x} ;
  3. States corresponding to suffixes of ω {\displaystyle \omega } are updated with transitions to l a s t {\displaystyle last} . To do this one should go through p , l i n k ( p ) , l i n k 2 ( p ) , {\displaystyle p,link(p),link^{2}(p),\dots } , until there is a state that already has a transition by x {\displaystyle x} ;
  4. Once the aforementioned loop is over, there are 3 cases:
    1. If none of states on the suffix path had a transition by x {\displaystyle x} , then x {\displaystyle x} never occurred in ω {\displaystyle \omega } before and the suffix link from l a s t {\displaystyle last} should lead to q 0 {\displaystyle q_{0}} ;
    2. If the transition by x {\displaystyle x} is found and leads from the state p {\displaystyle p} to the state q {\displaystyle q} , such that l e n ( p ) + 1 = l e n ( q ) {\displaystyle len(p)+1=len(q)} , then q {\displaystyle q} does not have to be split and it is a suffix link of l a s t {\displaystyle last} ;
    3. If the transition is found but l e n ( q ) > l e n ( p ) + 1 {\displaystyle len(q)>len(p)+1} , then words from q {\displaystyle q} having length at most l e n ( p ) + 1 {\displaystyle len(p)+1} should be segregated into new "clone" state c l {\displaystyle cl} ;
  5. If the previous step was concluded with the creation of c l {\displaystyle cl} , transitions from it and its suffix link should copy those of q {\displaystyle q} , at the same time c l {\displaystyle cl} is assigned to be common suffix link of both q {\displaystyle q} and l a s t {\displaystyle last} ;
  6. Transitions that have led to q {\displaystyle q} before but corresponded to words of length at most l e n ( p ) + 1 {\displaystyle len(p)+1} are redirected to c l {\displaystyle cl} . To do this, one continues going through the suffix path of p {\displaystyle p} until a state is found such that the transition by x {\displaystyle x} from it does not lead to q {\displaystyle q} .

The whole procedure is described by the following pseudo-code:[19]

function add_letter(x):
    define p = last
    assign last = new_state()
    assign len(last) = len(p) + 1
    while δ(p, x) is undefined:
        assign δ(p, x) = last, p = link(p)
    define q = δ(p, x)
    if q = last:
        assign link(last) = q0
    else if len(q) = len(p) + 1:
        assign link(last) = q
    else:
        define cl = new_state()
        assign len(cl) = len(p) + 1
        assign δ(cl) = δ(q), link(cl) = link(q)
        assign link(last) = link(q) = cl
        while δ(p, x) = q:
            assign δ(p, x) = cl, p = link(p)

Here q 0 {\displaystyle q_{0}} is the initial state of the automaton and n e w _ s t a t e ( ) {\displaystyle new\_state()} is a function creating a new state for it. It is assumed that l a s t {\displaystyle last} , l e n {\displaystyle len} , l i n k {\displaystyle link} and δ {\displaystyle \delta } are stored as global variables.[19]
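The pseudo-code above can be sketched in Python as follows. This is an illustrative translation, not code from the cited sources: the class name SuffixAutomaton is hypothetical, and it uses the common convention of marking the undefined suffix link of q0 with −1 instead of relying on the pseudo-code's handling of the initial state.

```python
class SuffixAutomaton:
    """Minimal sketch of the on-line construction described above."""

    def __init__(self):
        # State 0 is the initial state q0; link = -1 marks "undefined".
        self.len = [0]     # len(v): length of the longest word of state v
        self.link = [-1]   # link(v): suffix link of state v
        self.next = [{}]   # delta: transition function, one dict per state
        self.last = 0      # state corresponding to the whole word read so far

    def _new_state(self, length, link, trans):
        self.len.append(length)
        self.link.append(link)
        self.next.append(dict(trans))
        return len(self.len) - 1

    def add_letter(self, x):
        p = self.last
        cur = self._new_state(self.len[p] + 1, -1, {})
        self.last = cur
        # Step 3: give suffixes of the old word a transition by x to cur.
        while p != -1 and x not in self.next[p]:
            self.next[p][x] = cur
            p = self.link[p]
        if p == -1:
            # Case 4.1: x never occurred before.
            self.link[cur] = 0
        else:
            q = self.next[p][x]
            if self.len[q] == self.len[p] + 1:
                # Case 4.2: q needs no split.
                self.link[cur] = q
            else:
                # Case 4.3: split off a "clone" of q with the shorter words.
                cl = self._new_state(self.len[p] + 1, self.link[q], self.next[q])
                self.link[q] = self.link[cur] = cl
                # Step 6: redirect transitions of length <= len(p) + 1 to cl.
                while p != -1 and self.next[p].get(x) == q:
                    self.next[p][x] = cl
                    p = self.link[p]
```

Feeding the letters of a string one by one to add_letter maintains the suffix automaton of the prefix read so far, which is what makes the construction on-line.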

Complexity

The complexity of the algorithm may vary depending on the underlying structure used to store transitions of the automaton. It may be implemented in O ( n log | Σ | ) {\displaystyle O(n\log |\Sigma |)} with O ( n ) {\displaystyle O(n)} memory overhead, or in O ( n ) {\displaystyle O(n)} with O ( n | Σ | ) {\displaystyle O(n|\Sigma |)} memory overhead, if one assumes that memory allocation is done in O ( 1 ) {\displaystyle O(1)} . To obtain such complexity, one has to use the methods of amortized analysis. The value of l e n ( p ) {\displaystyle len(p)} strictly decreases with each iteration of the first cycle, while it may only increase by one after the first iteration of the cycle on the next add_letter call. The overall value of l e n ( p ) {\displaystyle len(p)} never exceeds n {\displaystyle n} , and since it increases by at most one between the appending of consecutive letters, the total number of iterations is at most linear as well. The linearity of the second cycle is shown in a similar way.[19]

Generalizations

The suffix automaton is closely related to other suffix structures and substring indices. Given a suffix automaton of a specific string, one may construct its suffix tree via compacting and recursive traversal in linear time.[20] Similar transforms are possible in both directions to switch between the suffix automaton of S {\displaystyle S} and the suffix tree of the reversed string S R {\displaystyle S^{R}} .[18] Besides this, several generalizations were developed: to construct an automaton for the set of strings given by a trie,[8] a compacted suffix automaton (CDAWG),[7] to maintain the structure of the automaton on a sliding window,[21] and to construct it in a bidirectional way, supporting the insertion of characters at both the beginning and the end of the string.[22]

Compacted suffix automaton

As was already mentioned above, a compacted suffix automaton is obtained via both compaction of a regular suffix automaton (by removing states which are non-final and have exactly one outgoing arc) and minimization of a suffix tree. Similarly to the regular suffix automaton, the states of a compacted suffix automaton may be defined in an explicit manner. A two-way extension γ {\displaystyle {\overset {\scriptstyle {\longleftrightarrow }}{\gamma }}} of a word γ {\displaystyle \gamma } is the longest word ω = β γ α {\displaystyle \omega =\beta \gamma \alpha } , such that every occurrence of γ {\displaystyle \gamma } in S {\displaystyle S} is preceded by β {\displaystyle \beta } and succeeded by α {\displaystyle \alpha } . In terms of left and right extensions, this means that the two-way extension is the left extension of the right extension or, equivalently, the right extension of the left extension, that is γ = γ = γ {\textstyle {\overset {\scriptstyle \longleftrightarrow }{\gamma }}={\overset {\scriptstyle \leftarrow }{\overset {\rightarrow }{\gamma }}}={\overset {\rightarrow }{\overset {\scriptstyle \leftarrow }{\gamma }}}} . In terms of two-way extensions, the compacted automaton is defined as follows:[15]

Theorem — Compacted suffix automaton of the word S {\displaystyle S} is defined by a pair ( V , E ) {\displaystyle (V,E)} , where:

  • V = { ω : ω Σ } {\displaystyle V=\{{\overleftrightarrow {\omega }}:\omega \in \Sigma ^{*}\}} is a set of automaton states;
  • E = { ( ω , x α , ω x ) : x Σ , α Σ , ω x = ω x α } {\displaystyle E=\{({\overleftrightarrow {\omega }},x\alpha ,{\overleftrightarrow {\omega x}}):x\in \Sigma ,\alpha \in \Sigma ^{*},{\overleftrightarrow {\omega x}}={\overleftrightarrow {\omega }}x\alpha \}} is a set of automaton transitions.

Two-way extensions induce an equivalence relation α = β {\textstyle {\overset {\scriptstyle \longleftrightarrow }{\alpha }}={\overset {\scriptstyle \longleftrightarrow }{\beta }}} which defines the set of words recognized by the same state of the compacted automaton. This equivalence relation is the transitive closure of the relation defined by ( α = β ) ( α = β ) {\textstyle ({\overset {\scriptstyle {\rightarrow }}{\alpha \,}}={\overset {\scriptstyle {\rightarrow }}{\beta \,}})\vee ({\overset {\scriptstyle {\leftarrow }}{\alpha }}={\overset {\scriptstyle {\leftarrow }}{\beta }})} , which highlights the fact that a compacted automaton may be obtained both by gluing suffix tree vertices equivalent via the α = β {\displaystyle {\overset {\scriptstyle {\leftarrow }}{\alpha }}={\overset {\scriptstyle {\leftarrow }}{\beta }}} relation (minimization of the suffix tree) and by gluing suffix automaton states equivalent via the α = β {\displaystyle {\overset {\scriptstyle {\rightarrow }}{\alpha \,}}={\overset {\scriptstyle {\rightarrow }}{\beta \,}}} relation (compaction of the suffix automaton).[23] If words α {\displaystyle \alpha } and β {\displaystyle \beta } have the same right extensions, and words β {\displaystyle \beta } and γ {\displaystyle \gamma } have the same left extensions, then cumulatively all strings α {\displaystyle \alpha } , β {\displaystyle \beta } and γ {\displaystyle \gamma } have the same two-way extensions. At the same time, it may happen that neither the left nor the right extensions of α {\displaystyle \alpha } and γ {\displaystyle \gamma } coincide.
As an example one may take S = β = a b {\displaystyle S=\beta =ab} , α = a {\displaystyle \alpha =a} and γ = b {\displaystyle \gamma =b} , for which the left and right extensions are as follows: α = β = a b = β = γ {\displaystyle {\overset {\scriptstyle {\rightarrow }}{\alpha \,}}={\overset {\scriptstyle {\rightarrow }}{\beta \,}}=ab={\overset {\scriptstyle {\leftarrow }}{\beta }}={\overset {\scriptstyle {\leftarrow }}{\gamma }}} , but γ = b {\displaystyle {\overset {\scriptstyle {\rightarrow }}{\gamma \,}}=b} and α = a {\displaystyle {\overset {\scriptstyle {\leftarrow }}{\alpha }}=a} . While the equivalence classes of one-way extensions are formed by continuous chains of nested prefixes or suffixes, the equivalence classes of bidirectional extensions are more complex; the only thing one may conclude for sure is that strings with the same two-way extension are substrings of the longest string having that two-way extension, yet they may not even have a non-empty substring in common. The total number of equivalence classes for this relation does not exceed n + 1 {\displaystyle n+1} , which implies that the compacted suffix automaton of a string of length n {\displaystyle n} has at most n + 1 {\displaystyle n+1} states. The number of transitions in such an automaton is at most 2 n − 2 {\displaystyle 2n-2} .[15]
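The example above can be verified by brute force directly from the definitions. The following Python sketch is illustrative only (the helper names occurrences, right_ext and left_ext are not from the cited sources) and is suitable only for tiny strings:

```python
def occurrences(s, w):
    """Start positions of the (non-empty) word w in s."""
    return [i for i in range(len(s) - len(w) + 1) if s[i:i + len(w)] == w]

def right_ext(s, w):
    """Longest word w*alpha such that every occurrence of w in s
    is followed by the same alpha."""
    starts, ext = occurrences(s, w), w
    while True:
        k = len(ext) + 1
        # Stop if some occurrence cannot be extended by one character,
        # or the extensions by one character disagree.
        if not all(i + k <= len(s) for i in starts):
            return ext
        cands = {s[i:i + k] for i in starts}
        if len(cands) != 1:
            return ext
        ext = cands.pop()

def left_ext(s, w):
    """Symmetric: the right extension computed in the reversed string."""
    return right_ext(s[::-1], w[::-1])[::-1]
```

For S = ab this confirms the claim: right_ext("ab", "a") and left_ext("ab", "b") both equal ab, while right_ext("ab", "b") = b and left_ext("ab", "a") = a, so a and b fall into the same two-way class only through the intermediate word ab.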

Suffix automaton of several strings

Consider a set of words T = { S 1 , S 2 , … , S k } {\displaystyle T=\{S_{1},S_{2},\dots ,S_{k}\}} . It is possible to construct a generalization of the suffix automaton that recognizes the language formed by the suffixes of all words from the set. Constraints on the number of states and transitions in such an automaton stay the same as for a single-word automaton if one sets n = | S 1 | + | S 2 | + … + | S k | {\displaystyle n=|S_{1}|+|S_{2}|+\dots +|S_{k}|} .[23] The algorithm is similar to the construction of the single-word automaton, except that, instead of the l a s t {\displaystyle last} state, the function add_letter works with the state corresponding to the word ω i {\displaystyle \omega _{i}} , assuming the transition from the set of words { ω 1 , … , ω i , … , ω k } {\displaystyle \{\omega _{1},\dots ,\omega _{i},\dots ,\omega _{k}\}} to the set { ω 1 , … , ω i x , … , ω k } {\displaystyle \{\omega _{1},\dots ,\omega _{i}x,\dots ,\omega _{k}\}} .[24][25]

This idea is further generalized to the case when T {\displaystyle T} is not given explicitly but instead is given by a prefix tree with Q {\displaystyle Q} vertices. Mohri et al. showed such an automaton would have at most 2 Q − 2 {\displaystyle 2Q-2} states and may be constructed in time linear in its size. At the same time, the number of transitions in such an automaton may reach O ( Q | Σ | ) {\displaystyle O(Q|\Sigma |)} : for example, for the set of words T = { σ 1 , a σ 1 , a 2 σ 1 , … , a n σ 1 , a n σ 2 , … , a n σ k } {\displaystyle T=\{\sigma _{1},a\sigma _{1},a^{2}\sigma _{1},\dots ,a^{n}\sigma _{1},a^{n}\sigma _{2},\dots ,a^{n}\sigma _{k}\}} over the alphabet Σ = { a , σ 1 , … , σ k } {\displaystyle \Sigma =\{a,\sigma _{1},\dots ,\sigma _{k}\}} , the total length of the words is O ( n 2 + n k ) {\textstyle O(n^{2}+nk)} , the number of vertices in the corresponding suffix trie is O ( n + k ) {\displaystyle O(n+k)} , and the corresponding suffix automaton is formed of O ( n + k ) {\displaystyle O(n+k)} states and O ( n k ) {\displaystyle O(nk)} transitions. The algorithm suggested by Mohri et al. mainly repeats the generic algorithm for building the automaton of several strings, but instead of growing words one by one, it traverses the trie in breadth-first search order and appends new characters as it meets them during the traversal, which guarantees amortized linear complexity.[26]

Sliding window

Some compression algorithms, such as LZ77 and RLE, may benefit from storing a suffix automaton or similar structure not for the whole string but only for its last k {\displaystyle k} characters while the string is updated. This is because the data being compressed is usually large, and using O ( n ) {\displaystyle O(n)} memory is undesirable. In 1985, Janet Blumer developed an algorithm to maintain a suffix automaton on a sliding window of size k {\displaystyle k} in O ( n k ) {\displaystyle O(nk)} worst-case time and O ( n log k ) {\displaystyle O(n\log k)} on average, assuming characters are distributed independently and uniformly. She also showed the O ( n k ) {\displaystyle O(nk)} complexity cannot be improved: if one considers words constructed as a concatenation of several ( a b ) m c ( a b ) m d {\displaystyle (ab)^{m}c(ab)^{m}d} words, where k = 6 m + 2 {\displaystyle k=6m+2} , then the number of states for the window of size k {\displaystyle k} would frequently change with jumps of order m {\displaystyle m} , which renders even a theoretical improvement over O ( n k ) {\displaystyle O(nk)} for regular suffix automata impossible.[27]

The same should be true for the suffix tree, because its vertices correspond to states of the suffix automaton of the reversed string, but this problem may be resolved by not explicitly storing every vertex corresponding to a suffix of the whole string, thus only storing vertices with at least two out-going edges. A variation of McCreight's suffix tree construction algorithm for this task was suggested in 1989 by Edward Fiala and Daniel Greene;[28] several years later a similar result was obtained with a variation of Ukkonen's algorithm by Jesper Larsson.[29][30] The existence of such an algorithm for the compacted suffix automaton, which absorbs some properties of both suffix trees and suffix automata, was an open question for a long time until it was shown by Martin Senft and Tomáš Dvořák in 2008 that it is impossible if the alphabet's size is at least two.[31]

One way to overcome this obstacle is to allow the window width to vary slightly while staying O ( k ) {\displaystyle O(k)} . This may be achieved by an approximate algorithm suggested by Inenaga et al. in 2004. The window for which the suffix automaton is built in this algorithm is not guaranteed to be of length exactly k {\displaystyle k} , but it is guaranteed to be at least k {\displaystyle k} and at most 2 k + 1 {\displaystyle 2k+1} , while the overall complexity of the algorithm stays linear.[32]

Applications

Suffix automaton of the string S {\displaystyle S} may be used to solve such problems as:[33][34]

  • Counting the number of distinct substrings of S {\displaystyle S} in O ( | S | ) {\displaystyle O(|S|)} on-line,
  • Finding the longest substring of S {\displaystyle S} occurring at least twice in O ( | S | ) {\displaystyle O(|S|)} ,
  • Finding the longest common substring of S {\displaystyle S} and T {\displaystyle T} in O ( | T | ) {\displaystyle O(|T|)} ,
  • Counting the number of occurrences of T {\displaystyle T} in S {\displaystyle S} in O ( | T | ) {\displaystyle O(|T|)} ,
  • Finding all occurrences of T {\displaystyle T} in S {\displaystyle S} in O ( | T | + k ) {\displaystyle O(|T|+k)} , where k {\displaystyle k} is the number of occurrences.

It is assumed here that T {\displaystyle T} is given on the input after suffix automaton of S {\displaystyle S} is constructed.[33]
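The longest-common-substring application, for example, follows a standard pattern: build the automaton of S, then scan T once, maintaining the current state and the length of the longest suffix of the scanned prefix that occurs in S, dropping along suffix links on a mismatch. The sketch below is illustrative (the function names build and longest_common_substring are not from the cited sources) and runs in O(|S|) preprocessing plus O(|T|) query time for a fixed alphabet:

```python
def build(s):
    """Suffix automaton of s as parallel lists (length, link, nxt)."""
    length, link, nxt, last = [0], [-1], [{}], 0
    for x in s:
        cur = len(length)
        length.append(length[last] + 1); link.append(-1); nxt.append({})
        p = last
        while p != -1 and x not in nxt[p]:
            nxt[p][x] = cur
            p = link[p]
        if p == -1:
            link[cur] = 0
        else:
            q = nxt[p][x]
            if length[q] == length[p] + 1:
                link[cur] = q
            else:
                # Clone q for the shorter words, as in the algorithm above.
                cl = len(length)
                length.append(length[p] + 1)
                link.append(link[q]); nxt.append(dict(nxt[q]))
                link[q] = link[cur] = cl
                while p != -1 and nxt[p].get(x) == q:
                    nxt[p][x] = cl
                    p = link[p]
        last = cur
    return length, link, nxt

def longest_common_substring(s, t):
    """Longest common substring of s and t, scanning t once."""
    length, link, nxt = build(s)
    v, l, best, best_end = 0, 0, 0, 0
    for i, x in enumerate(t):
        # On a mismatch, fall back along suffix links.
        while v != 0 and x not in nxt[v]:
            v = link[v]
            l = length[v]
        if x in nxt[v]:
            v = nxt[v][x]
            l += 1
        else:
            v, l = 0, 0
        if l > best:
            best, best_end = l, i + 1
    return t[best_end - best:best_end]
```

The variable l cannot be replaced by length[v] alone, because the matched suffix of t may be shorter than the longest word of the current state; tracking it separately is what keeps the scan correct.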

Suffix automata are also used in data compression,[35] music retrieval[36][37] and matching on genome sequences.[38]

References

  1. ^ a b Crochemore, Vérin (1997), p. 192
  2. ^ a b Weiner (1973)
  3. ^ Pratt (1973)
  4. ^ Slisenko (1983)
  5. ^ Blumer et al. (1984), p. 109
  6. ^ Chen, Seiferas (1985), p. 97
  7. ^ a b Blumer et al. (1987), p. 578
  8. ^ a b Inenaga et al. (2001), p. 1
  9. ^ a b Crochemore, Hancart (1997), pp. 3–6
  10. ^ a b Серебряков и др. (2006), pp. 50–54
  11. ^ Рубцов (2019), pp. 89–94
  12. ^ Hopcroft, Ullman (1979), pp. 65–68
  13. ^ a b Blumer et al. (1984), pp. 111–114
  14. ^ a b c d e f g h Crochemore, Hancart (1997), pp. 27–31
  15. ^ a b c d e f g Inenaga et al. (2005), pp. 159–162
  16. ^ Rubinchik, Shur (2018), pp. 1–2
  17. ^ Inenaga et al. (2005), pp. 156–158
  18. ^ a b c Fujishige et al. (2016), pp. 1–3
  19. ^ a b c d e f g Crochemore, Hancart (1997), pp. 31–36
  20. ^ Паращенко (2007), pp. 19–22
  21. ^ Blumer (1987), p. 451
  22. ^ Inenaga (2003), p. 1
  23. ^ a b Blumer et al. (1987), pp. 585–588
  24. ^ Blumer et al. (1987), pp. 588–589
  25. ^ Blumer et al. (1987), p. 593
  26. ^ Mohri et al. (2009), pp. 3558–3560
  27. ^ Blumer (1987), pp. 461–465
  28. ^ Fiala, Greene (1989), p. 490
  29. ^ Larsson (1996)
  30. ^ Brodnik, Jekovec (2018), p. 1
  31. ^ Senft, Dvořák (2008), p. 109
  32. ^ Inenaga et al. (2004)
  33. ^ a b Crochemore, Hancart (1997), pp. 36–39
  34. ^ Crochemore, Hancart (1997), pp. 39–41
  35. ^ Yamamoto et al. (2014), p. 675
  36. ^ Crochemore et al. (2003), p. 211
  37. ^ Mohri et al. (2009), p. 3553
  38. ^ Faro (2016), p. 145

Bibliography

  • Anselm Cyril Blumer; Janet Blumer; Andrzej Ehrenfeucht; David Haussler; Ross McConnell (1984). Building the minimal DFA for the set of all subwords of a word on-line in linear time. pp. 109–118. doi:10.1007/3-540-13345-3_9. ISBN 978-3-540-13345-2. Wikidata Q90309073.
  • Anselm Cyril Blumer; Janet Blumer; Andrzej Ehrenfeucht; David Haussler; Ross McConnell (July 1987). "Complete inverted files for efficient text retrieval and analysis". Journal of the ACM. 34 (3): 578–595. CiteSeerX 10.1.1.87.6824. doi:10.1145/28869.28873. ISSN 0004-5411. Zbl 1433.68118. Wikidata Q90311855.
  • Janet Blumer (December 1987). "How much is that DAWG in the window? A moving window algorithm for the directed acyclic word graph". Journal of Algorithms. 8 (4): 451–469. doi:10.1016/0196-6774(87)90045-9. ISSN 0196-6774. Zbl 0636.68109. Wikidata Q90327976.
  • Andrej Brodnik; Matevž Jekovec (3 August 2018). "Sliding Suffix Tree". Algorithms. 11 (8): 118. doi:10.3390/A11080118. ISSN 1999-4893. Zbl 1458.68043. Wikidata Q90431196.
  • Mu-Tian Chen; Joel Seiferas (1985). Efficient and Elegant Subword-Tree Construction. pp. 97–107. CiteSeerX 10.1.1.632.4. doi:10.1007/978-3-642-82456-2_7. ISBN 978-3-642-82456-2. Wikidata Q90329833.
  • Maxime Crochemore; Christophe Hancart (1997). Automata for Matching Patterns. Vol. 2. pp. 399–462. CiteSeerX 10.1.1.392.8637. doi:10.1007/978-3-662-07675-0_9. ISBN 978-3-642-59136-5. Wikidata Q90413384.
  • Maxime Crochemore; Renaud Vérin (1997). On compact directed acyclic word graphs. Lecture Notes in Computer Science. pp. 192–211. CiteSeerX 10.1.1.13.6892. doi:10.1007/3-540-63246-8_12. ISBN 978-3-540-69242-3. Wikidata Q90413885.
  • Maxime Crochemore; Costas S. Iliopoulos; Gonzalo Navarro; Yoan J. Pinzon (2003). A Bit-Parallel Suffix Automaton Approach for (δ,γ)-Matching in Music Retrieval. pp. 211–223. CiteSeerX 10.1.1.8.533. doi:10.1007/978-3-540-39984-1_16. ISBN 978-3-540-39984-1. Wikidata Q90414195.
  • Vladimir Serebryakov; Maksim Pavlovich Galochkin; Meran Gabibullaevich Furugian; Dmitriy Ruslanovich Gonchar (2006). Теория и реализация языков программирования: Учебное пособие (PDF) (in Russian). Moscow: MZ Press. ISBN 5-94073-094-9. Wikidata Q90432456.
  • Simone Faro (2016). Evaluation and Improvement of Fast Algorithms for Exact Matching on Genome Sequences. Lecture Notes in Computer Science. pp. 145–157. doi:10.1007/978-3-319-38827-4_12. ISBN 978-3-319-38827-4. Wikidata Q90412338.
  • Edward R. Fiala; Daniel H. Greene (April 1989). "Data compression with finite windows". Communications of the ACM. 32 (4): 490–505. doi:10.1145/63334.63341. ISSN 0001-0782. Wikidata Q90425560.
  • Yuta Fujishige; Yuki Tsujimaru; Shunsuke Inenaga; Hideo Bannai; Masayuki Takeda (2016). Computing DAWGs and Minimal Absent Words in Linear Time for Integer Alphabets (PDF). Vol. 58. pp. 38:1–38:14. doi:10.4230/LIPICS.MFCS.2016.38. ISBN 978-3-95977-016-3. ISSN 1868-8969. Zbl 1398.68703. Wikidata Q90410044.
  • John Edward Hopcroft; Jeffrey David Ullman (1979). Introduction to Automata Theory, Languages, and Computation (1st ed.). Massachusetts: Addison-Wesley. ISBN 978-81-7808-347-6. OL 9082218M. Wikidata Q90418603.
  • Shunsuke Inenaga (March 2003). "Bidirectional Construction of Suffix Trees" (PDF). Nordic Journal of Computing. 10 (1): 52–67. CiteSeerX 10.1.1.100.8726. ISSN 1236-6064. Wikidata Q90335534.
  • Shunsuke Inenaga; Hiromasa Hoshino; Ayumi Shinohara; Masayuki Takeda; Setsuo Arikawa; Giancarlo Mauri; Giulio Pavesi (March 2005). "On-line construction of compact directed acyclic word graphs". Discrete Applied Mathematics. 146 (2): 156–179. CiteSeerX 10.1.1.1039.6992. doi:10.1016/J.DAM.2004.04.012. ISSN 0166-218X. Zbl 1084.68137. Wikidata Q57518591.
  • Shunsuke Inenaga; Hiromasa Hoshino; Ayumi Shinohara; Masayuki Takeda; Setsuo Arikawa (2001). "Construction of the CDAWG for a trie" (PDF). Prague Stringology Conference. Proceedings: 37–48. CiteSeerX 10.1.1.24.2637. Wikidata Q90341606.
  • Shunsuke Inenaga; Ayumi Shinohara; Masayuki Takeda; Setsuo Arikawa (March 2004). "Compact directed acyclic word graphs for a sliding window". Journal of Discrete Algorithms. 2 (1): 33–51. CiteSeerX 10.1.1.101.358. doi:10.1016/S1570-8667(03)00064-9. ISSN 1570-8667. Zbl 1118.68755. Wikidata Q90345535.
  • N. Jesper Larsson (1996). "Extended application of suffix trees to data compression". Proceedings. Data Compression Conference: 190–199. CiteSeerX 10.1.1.12.8623. doi:10.1109/DCC.1996.488324. ISSN 2375-0383. Wikidata Q90427112.
  • Mehryar Mohri; Pedro Moreno; Eugene Weinstein (September 2009). "General suffix automaton construction algorithm and space bounds". Theoretical Computer Science. 410 (37): 3553–3562. CiteSeerX 10.1.1.157.7443. doi:10.1016/J.TCS.2009.03.034. ISSN 0304-3975. Zbl 1194.68143. Wikidata Q90410808.
  • Дмитрий А. Паращенко (2007), Обработка строк на основе суффиксных автоматов (PDF) (in Russian), Saint Petersburg: ITMO University, Wikidata Q90436837
  • Vaughan Ronald Pratt (1973), Improvements and applications for the Weiner repetition finder, OCLC 726598262, Wikidata Q90300966
  • Александр Александрович Рубцов (2019). Заметки и задачи о регулярных языках и конечных автоматах (PDF) (in Russian). Moscow: Moscow Institute of Physics and Technology. ISBN 978-5-7417-0702-9. Wikidata Q90435728.
  • Mikhail Rubinchik; Arseny M. Shur (February 2018). "Eertree: An efficient data structure for processing palindromes in strings" (PDF). European Journal of Combinatorics. 68: 249–265. arXiv:1506.04862. doi:10.1016/J.EJC.2017.07.021. ISSN 0195-6698. Zbl 1374.68131. Wikidata Q90726647.
  • Martin Senft; Tomáš Dvořák (2008). Sliding CDAWG Perfection. pp. 109–120. doi:10.1007/978-3-540-89097-3_12. ISBN 978-3-540-89097-3. Wikidata Q90426624.
  • Anatoly Olesievich Slisenko (1983). "Detection of periodicities and string-matching in real time". Journal of Mathematical Sciences. 22 (3): 1316–1387. doi:10.1007/BF01084395. ISSN 1072-3374. Zbl 0509.68043. Wikidata Q90305414.
  • Peter Weiner (October 1973). "Linear pattern matching algorithms". Symposium on Foundations of Computer Science: 1–11. CiteSeerX 10.1.1.474.9582. doi:10.1109/SWAT.1973.13. Wikidata Q29541479.
  • Jun'ichi Yamamoto; Tomohiro I; Hideo Bannai; Shunsuke Inenaga; Masayuki Takeda (2014). Faster Compact On-Line Lempel-Ziv Factorization (PDF). Leibniz International Proceedings in Informatics. Vol. 25. pp. 675–686. CiteSeerX 10.1.1.742.6691. doi:10.4230/LIPICS.STACS.2014.675. ISBN 978-3-939897-65-1. ISSN 1868-8969. Zbl 1359.68341. Wikidata Q90348192.

External links

  • Media related to Suffix automaton at Wikimedia Commons
  • Suffix automaton article on E-Maxx Algorithms in English