Theoretical Aspects of Lexical Analysis/Exercise 10
 

Compute the non-deterministic finite automaton (NFA) using Thompson's algorithm, and from it the minimal deterministic finite automaton (DFA).
The alphabet is Σ = { a, b }. Indicate the number of processing steps for the given input string.

  • G = { a*|b, ba*, b* }, input string = aababb

NFA

The following is the result of applying Thompson's algorithm. State 8 recognizes the first expression (token T1); state 13 recognizes token T2; and state 17 recognizes token T3.
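The Graphviz figure embedded in the original page does not survive in this text version. The DOT sketch below is a reconstruction of the Thompson NFA that agrees with the accepting states named above (8 for T1, 13 for T2, 17 for T3) and with the ε-closures listed in the determination table in the next section; the numbering of the intermediate states is inferred and may differ slightly from the original figure.

<dot-hack>
digraph nfa {
    rankdir=LR
    { node [shape=circle style=invis] s }
    node [shape=doublecircle,fontsize=10] 8 13 17
    node [shape=circle,fontsize=10]
    s -> 0
    /* T1 = a*|b, accepted in state 8 */
    0 -> 1 [label="ε"]
    1 -> 2 [label="ε"]
    1 -> 6 [label="ε"]
    2 -> 3 [label="ε"]
    2 -> 5 [label="ε"]
    3 -> 4 [label="a"]
    4 -> 3 [label="ε"]
    4 -> 5 [label="ε"]
    5 -> 8 [label="ε"]
    6 -> 7 [label="b"]
    7 -> 8 [label="ε"]
    /* T2 = ba*, accepted in state 13 */
    0 -> 9 [label="ε"]
    9 -> 10 [label="b"]
    10 -> 11 [label="ε"]
    10 -> 13 [label="ε"]
    11 -> 12 [label="a"]
    12 -> 11 [label="ε"]
    12 -> 13 [label="ε"]
    /* T3 = b*, accepted in state 17 */
    0 -> 14 [label="ε"]
    14 -> 15 [label="ε"]
    14 -> 17 [label="ε"]
    15 -> 16 [label="b"]
    16 -> 15 [label="ε"]
    16 -> 17 [label="ε"]
    fontsize=10
}
</dot-hack>

Rendering this with Graphviz (for example, dot -Tpng nfa.dot -o nfa.png) gives the usual Thompson layout: one ε-branch from the start state 0 into each of the three sub-automata.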

DFA

Determination table for the above NFA:

I_n | α ∈ Σ | move(I_n, α) | ε-closure(move(I_n, α))            | I_(n+1) = ε-closure(move(I_n, α))
 -  |   -   | 0            | 0, 1, 2, 3, 5, 6, 8, 9, 14, 15, 17 | 0 (T1)
 0  |   a   | 4            | 3, 4, 5, 8                         | 1 (T1)
 0  |   b   | 7, 10, 16    | 7, 8, 10, 11, 13, 15, 16, 17       | 2 (T1)
 1  |   a   | 4            | 3, 4, 5, 8                         | 1 (T1)
 1  |   b   | -            | -                                  | -
 2  |   a   | 12           | 11, 12, 13                         | 3 (T2)
 2  |   b   | 16           | 15, 16, 17                         | 4 (T3)
 3  |   a   | 12           | 11, 12, 13                         | 3 (T2)
 3  |   b   | -            | -                                  | -
 4  |   a   | -            | -                                  | -
 4  |   b   | 16           | 15, 16, 17                         | 4 (T3)


Graphically, the DFA is represented as follows:
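The embedded figure is again missing from the text version; the DOT sketch below is built directly from the determination table above. All five states are final: states 0, 1, and 2 recognize T1, state 3 recognizes T2, and state 4 recognizes T3.

<dot-hack>
digraph dfa {
    rankdir=LR
    { node [shape=circle style=invis] s }
    node [shape=doublecircle,fontsize=10]
    s -> 0
    /* the second line of each label is the token recognized by that state */
    0 [label="0\nT1"]
    1 [label="1\nT1"]
    2 [label="2\nT1"]
    3 [label="3\nT2"]
    4 [label="4\nT3"]
    0 -> 1 [label="a",fontsize=10]
    0 -> 2 [label="b",fontsize=10]
    1 -> 1 [label="a",fontsize=10]
    2 -> 3 [label="a",fontsize=10]
    2 -> 4 [label="b",fontsize=10]
    3 -> 3 [label="a",fontsize=10]
    4 -> 4 [label="b",fontsize=10]
    fontsize=10
}
</dot-hack>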

The minimization tree is as follows. Note that before considering transition behavior, states are split according to the token they recognize.
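The tree figure itself is also missing here. The sketch below reflects the splits described in the text: the full set of DFA states is first partitioned by recognized token; the T1 group {0, 1, 2} then splits on a (on a, state 2 moves to the class of state 3 while states 0 and 1 stay within the T1 class), and {0, 1} splits on b (state 0 has a b transition, state 1 does not). The label on the root edge leading to {0, 1, 2} is assumed to be T1.

<dot-hack>
digraph mintree {
    node [shape=none,fixedsize=true,width=0.3,fontsize=10]
    "{0, 1, 2, 3, 4}" -> "{0, 1, 2}" [label="  T1",fontsize=10]
    "{0, 1, 2, 3, 4}" -> "{3}" [label="  T2",fontsize=10]
    "{0, 1, 2, 3, 4}" -> "{4}" [label="  T3",fontsize=10]
    "{0, 1, 2}" -> "{0, 1}" [label="  a",fontsize=10]
    "{0, 1, 2}" -> "{2}" [label="  a",fontsize=10]
    "{0, 1}" -> "{0}" [label="  b",fontsize=10]
    "{0, 1}" -> "{1}" [label="  b",fontsize=10]
    fontsize=10
}
</dot-hack>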

The tree expansion for non-splitting sets has been omitted for simplicity ("a" transitions for super-state {0, 1}).

Since every leaf of the minimization tree is a singleton set, no states can be merged: the DFA is already minimal.

Input Analysis

I_n | Input   | I_(n+1) / Token
 0  | aababb$ | 1
 1  | ababb$  | 1
 1  | babb$   | T1 (aa)
 0  | babb$   | 2
 2  | abb$    | 3
 3  | bb$     | T2 (ba)
 0  | bb$     | 2
 2  | b$      | 4
 4  | $       | T3 (bb)

The input string aababb is, after 9 steps, split into three tokens: T1 (corresponding to lexeme aa), T2 (ba), and T3 (bb).