Theoretical Aspects of Lexical Analysis

From Wiki**3

Revision as of 22:00, 15 March 2008 by Root (Building the NFA: Thompson's Algorithm)

Lexical analysis, the first step in the compilation process, splits the input data into segments and classifies them. Each segment of the input (a lexeme) is assigned a label (its token).

Here, we will use regular expressions to recognize portions of the input text.

Regular Expressions

Regular expressions are defined over a finite alphabet Σ = { a, b, ... } together with the empty string ε:

The languages (sets of strings) for each of these entities are:

  • {ε}, for ε
  • {a}, for an entry a in Σ

The following primitive constructors are defined:

  • concatenation
  • alternative
  • Kleene-star (*)

Extensions (derived from the above):

  • Transitive closure (+) - a+ ("one or more 'a'")
  • Optionality (?) - a? ("zero or one 'a'")
  • Character classes - [a-z] ("all chars in the 'a-z' range" - only one character is matched)
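As an illustration, the behavior of these derived operators can be checked with Python's re module (which implements them with the same semantics described above):

```python
import re

# '+' (transitive closure): one or more occurrences
assert re.fullmatch(r"a+", "aaa") is not None
assert re.fullmatch(r"a+", "") is None

# '?' (optionality): zero or one occurrence
assert re.fullmatch(r"a?", "") is not None
assert re.fullmatch(r"a?", "aa") is None

# '[a-z]' (character class): exactly one character from the range
assert re.fullmatch(r"[a-z]", "q") is not None
assert re.fullmatch(r"[a-z]", "qq") is None

print("all checks passed")
```

Note that each extension can be rewritten using only the primitive constructors: a+ is aa*, and a? is (a|ε).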

Recognizing/Matching Regular Expressions: Thompson's Algorithm

Since we are going to use sets of regular expressions for recognizing input strings, we need a way of implementing that functionality. The recognition process can be efficiently carried out by finite state automata that either accept or reject a given string.

Ken Thompson, the creator of the B language (one of the predecessors of C) and one of the creators of the UNIX operating system, devised the algorithm that carries his name and describes how to build an acceptor for a given regular expression.

Used in Thompson's regular expression implementations (including the UNIX grep command), the algorithm creates an NFA from a regular expression specification; the NFA can then be converted into a DFA, which after minimization yields an automaton that is an acceptor for the original expression.

The following sections cover the algorithm's construction primitives and how to recognize a simple expression. Lexical analysis as performed by tools like flex requires watching for several expressions simultaneously, each one corresponding to a token. Such automata feature multiple final states, one or more for each recognized expression.

Building the NFA: Thompson's Algorithm

Thompson's algorithm is based on a few primitives, as shown in the following table:

Diagram               Meaning
Thompson-epsilon.png  Empty expression (in the following diagrams, empty expressions are represented by unlabeled edges).
Thompson-a.png        One occurrence of an expression.
Thompson-a-star.png   Zero or more occurrences of an expression: this case may be generalized to more complex expressions. In that case, the complex expression simply takes the place of the labeled arc in the diagram.
Thompson-ab.png       Concatenation of two or more expressions: the first expression's final state coincides with the second's initial state. This case, like the previous one, may be generalized to describe more complex concatenations.
Thompson-a-or-b.png   Alternative expressions: the initial and final states of the two expressions are connected to a new initial state and a new final state by empty edges. Both expressions may be replaced by more general cases.

Complex expressions are built from these primitives. The following diagram corresponds to the expression a(a|b)*|c (note how the Kleene-star operator affects an alternative group):

Thompson-aabc.png
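The construction primitives above can be sketched in code (a minimal Python sketch; the names and state representation are illustrative, not from the article). Each fragment has one initial and one final state; here concatenation links the two fragments with an empty edge, which is equivalent to making the states coincide as in the diagram. Acceptance is tested by simulating the NFA with ε-closures.

```python
import itertools

_counter = itertools.count()

def new_state():
    return next(_counter)

class NFA:
    """A fragment with one start state, one accept state, and a list of
    (source, label, destination) edges; label None is an epsilon edge."""
    def __init__(self, start, accept, edges):
        self.start, self.accept, self.edges = start, accept, edges

def symbol(c):
    s, f = new_state(), new_state()
    return NFA(s, f, [(s, c, f)])

def concat(a, b):
    # Link a's final state to b's initial state with an epsilon edge.
    return NFA(a.start, b.accept, a.edges + b.edges + [(a.accept, None, b.start)])

def alternative(a, b):
    # Connect both fragments to a new initial and a new final state.
    s, f = new_state(), new_state()
    return NFA(s, f, a.edges + b.edges + [
        (s, None, a.start), (s, None, b.start),
        (a.accept, None, f), (b.accept, None, f)])

def star(a):
    # Zero or more occurrences: bypass edge plus a loop back to the start.
    s, f = new_state(), new_state()
    return NFA(s, f, a.edges + [
        (s, None, a.start), (a.accept, None, f),
        (s, None, f), (a.accept, None, a.start)])

def eps_closure(nfa, states):
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for src, lbl, dst in nfa.edges:
            if src == q and lbl is None and dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen

def accepts(nfa, text):
    current = eps_closure(nfa, {nfa.start})
    for ch in text:
        moved = {dst for src, lbl, dst in nfa.edges
                 if src in current and lbl == ch}
        current = eps_closure(nfa, moved)
    return nfa.accept in current

# The expression from the diagram above: a(a|b)*|c
expr = alternative(
    concat(symbol("a"), star(alternative(symbol("a"), symbol("b")))),
    symbol("c"))

print(accepts(expr, "aab"))  # True
print(accepts(expr, "c"))    # True
print(accepts(expr, "b"))    # False
```

Simulating the NFA directly, as done here, is linear in the input but tracks a set of states at each step; converting to a DFA (next section) trades construction work for a single active state per step.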

Building DFAs from NFAs
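The standard technique for this conversion is the subset construction: each DFA state is the ε-closure of a set of NFA states, and a DFA state is final if it contains at least one NFA final state. A minimal sketch (the NFA below, recognizing a|b, is illustrative data, not taken from the article; None again labels epsilon edges):

```python
def eps_closure(delta, states):
    """Epsilon-closure of a set of NFA states."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for dst in delta.get((q, None), ()):
            if dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return frozenset(seen)

def subset_construction(delta, alphabet, start, finals):
    """Build a DFA whose states are frozensets of NFA states."""
    d_start = eps_closure(delta, {start})
    dfa, work = {}, [d_start]
    while work:
        S = work.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in alphabet:
            moved = {d for q in S for d in delta.get((q, a), ())}
            T = eps_closure(delta, moved)
            dfa[S][a] = T
            if T not in dfa:
                work.append(T)
    d_finals = {S for S in dfa if S & finals}
    return d_start, dfa, d_finals

def dfa_accepts(d_start, dfa, d_finals, text):
    S = d_start
    for ch in text:
        S = dfa[S][ch]
    return S in d_finals

# Hand-coded NFA for a|b: 0 -eps-> 1 and 3; 1 -a-> 2; 3 -b-> 4; 2,4 -eps-> 5
delta = {(0, None): [1, 3], (1, "a"): [2], (3, "b"): [4],
         (2, None): [5], (4, None): [5]}
d_start, dfa, d_finals = subset_construction(delta, "ab", 0, {5})

print(dfa_accepts(d_start, dfa, d_finals, "a"))   # True
print(dfa_accepts(d_start, dfa, d_finals, "b"))   # True
print(dfa_accepts(d_start, dfa, d_finals, "ab"))  # False
```

Note that the empty frozenset acts as the dead state: once the input can no longer match, every remaining character loops there.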

DFA Minimization

Input Processing

Recognizing Multiple Expressions

Example 1: Ambiguous Expressions

Example 2: Backtracking