Regular language
from Wikipedia

In theoretical computer science and formal language theory, a regular language (also called a rational language)[1][2] is a formal language that can be defined by a regular expression, in the strict sense in theoretical computer science (as opposed to many modern regular expression engines, which are augmented with features that allow the recognition of non-regular languages).
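
The parenthetical can be made concrete with a short, illustrative snippet using Python's re module (the pattern and test strings are chosen for this example, not taken from any particular source): the backreference \1 lets the engine accept { aⁿbaⁿ | n ≥ 0 }, a language that is not regular and that no regular expression in the strict theoretical sense can describe.

    import re

    # Backreferences go beyond regular languages: this pattern accepts
    # exactly the strings of the form a^n b a^n, which is not regular.
    pattern = re.compile(r"(a*)b\1")

    print(bool(pattern.fullmatch("aaabaaa")))  # True  (n = 3)
    print(bool(pattern.fullmatch("aabaaa")))   # False (unequal numbers of a's)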

Alternatively, a regular language can be defined as a language recognised by a finite automaton. The equivalence of regular expressions and finite automata is known as Kleene's theorem[3] (after American mathematician Stephen Cole Kleene). In the Chomsky hierarchy, regular languages are the languages generated by Type-3 grammars.

Formal definition


The collection of regular languages over an alphabet Σ is defined recursively as follows:

  • The empty language ∅ is a regular language.
  • For each a ∈ Σ (a belongs to Σ), the singleton language {a} is a regular language.
  • If A is a regular language, A* (Kleene star) is a regular language. Due to this, the empty string language {ε} is also regular.
  • If A and B are regular languages, then A ∪ B (union) and A·B (concatenation) are regular languages.
  • No other languages over Σ are regular.

See Regular expression § Formal language theory for syntax and semantics of regular expressions.
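
The recursive definition above translates almost line by line into code. The following is a minimal Python sketch (the function names empty, sym, union, concat and star are invented for this illustration): each constructor returns a function that enumerates the strings of the defined language up to a given length bound, following exactly the clauses listed above.

    def empty(max_len):
        """The empty language ∅ (pass `empty` itself where a language is expected)."""
        return set()

    def sym(a):
        """The singleton language {a} for a symbol a."""
        return lambda max_len: {a} if len(a) <= max_len else set()

    def union(r, s):
        """A ∪ B."""
        return lambda max_len: r(max_len) | s(max_len)

    def concat(r, s):
        """AB = { xy : x in A, y in B }, truncated to the length bound."""
        return lambda max_len: {x + y for x in r(max_len) for y in s(max_len)
                                if len(x + y) <= max_len}

    def star(r):
        """A*: repeatedly concatenate A, always including the empty string ε."""
        def language(max_len):
            result, frontier = {""}, {""}
            while frontier:
                frontier = {x + y for x in frontier for y in r(max_len)
                            if len(x + y) <= max_len} - result
                result |= frontier
            return result
        return language

    # Example: (a ∪ b)* restricted to strings of length at most 2.
    ab_star = star(union(sym("a"), sym("b")))
    print(sorted(ab_star(2)))   # ['', 'a', 'aa', 'ab', 'b', 'ba', 'bb']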

Examples


All finite languages are regular; in particular the empty string language {ε} = ∅* is regular. Other typical examples include the language consisting of all strings over the alphabet {a, b} which contain an even number of a's, or the language consisting of all strings of the form: several a's followed by several b's.
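
For the first example, a two-state deterministic finite automaton suffices: one state records that the number of a's read so far is even, the other that it is odd. A minimal Python sketch of this automaton (the state names and function name are invented for the illustration):

    # DFA for { w ∈ {a,b}* : w contains an even number of a's }.
    TRANSITIONS = {
        ("even", "a"): "odd",
        ("even", "b"): "even",
        ("odd", "a"): "even",
        ("odd", "b"): "odd",
    }

    def accepts_even_as(word):
        state = "even"                        # start state, also accepting
        for symbol in word:
            state = TRANSITIONS[(state, symbol)]
        return state == "even"

    print(accepts_even_as("abba"))   # True  (two a's)
    print(accepts_even_as("ba"))     # False (one a)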

A simple example of a language that is not regular is the set of strings {anbn | n ≥ 0}.[4] Intuitively, it cannot be recognized with a finite automaton, since a finite automaton has finite memory and it cannot remember the exact number of a's. Techniques to prove this fact rigorously are given below.

Equivalent formalisms


A regular language satisfies the following equivalent properties:

  1. it is the language of a regular expression (by the above definition)
  2. it is the language accepted by a nondeterministic finite automaton (NFA)[note 1][note 2]
  3. it is the language accepted by a deterministic finite automaton (DFA)[note 3][note 4]
  4. it can be generated by a regular grammar[note 5][note 6]
  5. it is the language accepted by an alternating finite automaton
  6. it is the language accepted by a two-way finite automaton
  7. it can be generated by a prefix grammar
  8. it can be accepted by a read-only Turing machine
  9. it can be defined in monadic second-order logic (Büchi–Elgot–Trakhtenbrot theorem)[5]
  10. it is recognized by some finite syntactic monoid M, meaning it is the preimage {w ∈ Σ* | f(w) ∈ S} of a subset S of a finite monoid M under a monoid homomorphism f : Σ* → M from the free monoid on its alphabet[note 7]
  11. the number of equivalence classes of its syntactic congruence is finite.[note 8][note 9] (This number equals the number of states of the minimal deterministic finite automaton accepting L.)

Properties 10. and 11. are purely algebraic approaches to define regular languages; a similar set of statements can be formulated for a monoid M ⊆ Σ*. In this case, equivalence over M leads to the concept of a recognizable language.

Some authors use one of the above properties different from "1." as an alternative definition of regular languages.

Some of the equivalences above, particularly those among the first four formalisms, are called Kleene's theorem in textbooks. Precisely which one (or which subset) is called such varies between authors. One textbook calls the equivalence of regular expressions and NFAs ("1." and "2." above) "Kleene's theorem".[6] Another textbook calls the equivalence of regular expressions and DFAs ("1." and "3." above) "Kleene's theorem".[7] Two other textbooks first prove the expressive equivalence of NFAs and DFAs ("2." and "3.") and then state "Kleene's theorem" as the equivalence between regular expressions and finite automata (the latter said to describe "recognizable languages").[2][8] A linguistically oriented text first equates regular grammars ("4." above) with DFAs and NFAs, calls the languages generated by (any of) these "regular", after which it introduces regular expressions which it terms to describe "rational languages", and finally states "Kleene's theorem" as the coincidence of regular and rational languages.[9] Other authors simply define "rational expression" and "regular expressions" as synonymous and do the same with "rational languages" and "regular languages".[1][2]
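
To see why "2." and "3." coincide, the classical construction converts an NFA into a DFA whose states are sets of NFA states (the subset or powerset construction). The following is a hedged Python sketch, assuming an ε-free NFA given as a dictionary mapping (state, symbol) pairs to sets of states (the representation and names are invented for this illustration):

    from itertools import chain

    def nfa_to_dfa(alphabet, delta, start, accepting):
        """Subset construction: each reachable set of NFA states becomes one DFA state."""
        start_set = frozenset([start])
        dfa_states, worklist, dfa_delta = {start_set}, [start_set], {}
        while worklist:
            current = worklist.pop()
            for symbol in alphabet:
                # Union of the NFA moves from every state in the current subset.
                target = frozenset(chain.from_iterable(
                    delta.get((q, symbol), ()) for q in current))
                dfa_delta[(current, symbol)] = target
                if target not in dfa_states:
                    dfa_states.add(target)
                    worklist.append(target)
        dfa_accepting = {s for s in dfa_states if s & accepting}
        return dfa_states, dfa_delta, start_set, dfa_accepting

    # Example: NFA for binary strings whose second-to-last symbol is 1.
    delta = {("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"},
             ("q1", "0"): {"q2"}, ("q1", "1"): {"q2"}}
    dfa_states, _, _, _ = nfa_to_dfa({"0", "1"}, delta, "q0", {"q2"})
    print(len(dfa_states))   # 4 reachable DFA states for this 3-state NFA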

Apparently, the term regular originates from a 1951 technical report where Kleene introduced regular events and explicitly welcomed "any suggestions as to a more descriptive term".[10] Noam Chomsky, in his 1959 seminal article, used the term regular in a different meaning at first (referring to what is called Chomsky normal form today),[11] but noticed that his finite state languages were equivalent to Kleene's regular events.[12]

Closure properties


The regular languages are closed under various operations, that is, if the languages K and L are regular, so is the result of the following operations:

  • the set-theoretic Boolean operations: union K ∪ L, intersection K ∩ L, and complement L̄, hence also relative complement K ∖ L.[13]
  • the regular operations: union K ∪ L, concatenation K·L, and Kleene star L*.[14]
  • the trio operations: string homomorphism, inverse string homomorphism, and intersection with regular languages. As a consequence they are closed under arbitrary finite state transductions, like quotient K / L with a regular language. Even more, regular languages are closed under quotients with arbitrary languages: If L is regular then L / K is regular for any K.[15]
  • the reverse (or mirror image) LR.[16] Given a nondeterministic finite automaton to recognize L, an automaton for LR can be obtained by reversing all transitions and interchanging starting and finishing states. This may result in multiple starting states; ε-transitions can be used to join them.
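
The reversal construction in the last item is short enough to write out. A minimal sketch, assuming an NFA given as a set of (source, symbol, target) transition triples together with sets of start and accepting states, and allowing multiple start states instead of joining them with ε-transitions (the representation and names are invented for this illustration):

    def reverse_nfa(transitions, starts, accepting):
        """Automaton for L^R: flip every transition and swap start/accepting states."""
        reversed_transitions = {(t, a, s) for (s, a, t) in transitions}
        return reversed_transitions, set(accepting), set(starts)

    # Example: an NFA for { w ∈ {a,b}* : w starts with a }.
    nfa = ({("s", "a", "f"), ("f", "a", "f"), ("f", "b", "f")}, {"s"}, {"f"})
    rev = reverse_nfa(*nfa)      # recognizes the reverse language: strings ending with a
    print(rev[1], rev[2])        # start states {'f'}, accepting states {'s'}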

Decidability properties


Given two deterministic finite automata A and B, it is decidable whether they accept the same language.[17] As a consequence, using the above closure properties, the following problems are also decidable for arbitrarily given deterministic finite automata A and B, with accepted languages LA and LB, respectively:

  • Containment: is LA ⊆ LB ?[note 10]
  • Disjointness: is LA ∩ LB = {} ?
  • Emptiness: is LA = {} ?
  • Universality: is LA = Σ* ?
  • Membership: given a ∈ Σ*, is a ∈ LB ?
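
All of these questions reduce to reachability checks on a product automaton. A minimal Python sketch of the equivalence test (the names are invented for this illustration), assuming two complete DFAs over the same alphabet: they accept the same language exactly when no reachable pair of states is accepting in one DFA but not the other.

    from collections import deque

    def equivalent(dfa_a, dfa_b, alphabet):
        """Each DFA is (delta, start, accepting) with a total transition function delta."""
        (da, sa, fa), (db, sb, fb) = dfa_a, dfa_b
        seen, queue = {(sa, sb)}, deque([(sa, sb)])
        while queue:                              # breadth-first search of the product
            p, q = queue.popleft()
            if (p in fa) != (q in fb):            # one accepts, the other rejects
                return False
            for c in alphabet:
                nxt = (da[(p, c)], db[(q, c)])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return True

    # Example: two differently presented DFAs for "even number of a's" over {a, b}.
    d1 = ({("e", "a"): "o", ("e", "b"): "e", ("o", "a"): "e", ("o", "b"): "o"}, "e", {"e"})
    d2 = ({(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}, 0, {0})
    print(equivalent(d1, d2, {"a", "b"}))    # True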

For regular expressions, the universality problem is NP-complete already for a singleton alphabet.[18] For larger alphabets, that problem is PSPACE-complete.[19] If regular expressions are extended to allow also a squaring operator, with "A2" denoting the same as "AA", still just regular languages can be described, but the universality problem has an exponential space lower bound,[20][21][22] and is in fact complete for exponential space with respect to polynomial-time reduction.[23]

For a fixed finite alphabet, the theory of the set of all languages – together with strings, membership of a string in a language, and for each character, a function to append the character to a string (and no other operations) – is decidable, and its minimal elementary substructure consists precisely of regular languages. For a binary alphabet, the theory is called S2S.[24]

Complexity results


In computational complexity theory, the complexity class of all regular languages is sometimes referred to as REGULAR or REG and equals DSPACE(O(1)), the decision problems that can be solved in constant space (the space used is independent of the input size). REGULAR ⊈ AC0, since it (trivially) contains the parity problem of determining whether the number of 1 bits in the input is even or odd, and this problem is not in AC0.[25] On the other hand, REGULAR does not contain AC0, because the nonregular language of palindromes, or the nonregular language {0ⁿ1ⁿ : n ≥ 0}, can both be recognized in AC0.[26]

If a language is not regular, it requires a machine with at least Ω(log log n) space to recognize (where n is the input size).[27] In other words, DSPACE(o(log log n)) equals the class of regular languages.[27] In practice, most nonregular problems are studied in a setting with at least logarithmic space, as this is the amount of space required to store a pointer into the input tape.[28]

Location in the Chomsky hierarchy

[Figure: regular languages within the classes of the Chomsky hierarchy]

To locate the regular languages in the Chomsky hierarchy, one notices that every regular language is context-free. The converse is not true: for example, the language consisting of all strings having the same number of a's as b's is context-free but not regular. To prove that a language is not regular, one often uses the Myhill–Nerode theorem and the pumping lemma. Other approaches include using the closure properties of regular languages[29] or quantifying Kolmogorov complexity.[30]
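
As a worked illustration of the pumping-lemma technique (a standard argument, sketched here rather than taken from a particular source), consider L = {aⁿbⁿ | n ≥ 0}. Suppose L were regular with pumping length p, and take s = aᵖbᵖ ∈ L. Any decomposition s = xyz with |xy| ≤ p and |y| > 0 forces y to consist only of a's, say y = aᵏ with k ≥ 1. Pumping once gives xy²z = aᵖ⁺ᵏbᵖ, which has more a's than b's and therefore lies outside L, contradicting the lemma; hence L is not regular.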

Important subclasses of regular languages include:

  • Finite languages, those containing only a finite number of words.[31] These are regular languages, as one can create a regular expression that is the union of every word in the language.
  • Star-free languages, those that can be described by a regular expression constructed from the empty symbol, letters, concatenation and all Boolean operators (see algebra of sets) including complementation but not the Kleene star: this class includes all finite languages.[32]

Number of words in a regular language


Let s_L(n) denote the number of words of length n in L. The ordinary generating function for L is the formal power series

    S_L(z) = ∑_{n ≥ 0} s_L(n) zⁿ.

The generating function of a language L is a rational function if L is regular.[33] Hence for every regular language L the sequence s_L(n) is constant-recursive; that is, there exist an integer constant n₀, complex constants λ₁, …, λₖ and complex polynomials p₁(x), …, pₖ(x) such that for every n ≥ n₀ the number s_L(n) of words of length n in L is

    s_L(n) = p₁(n)λ₁ⁿ + ⋯ + pₖ(n)λₖⁿ.[34][35][36][37]

Thus, non-regularity of certain languages can be proved by counting the words of a given length in them. Consider, for example, the Dyck language of strings of balanced parentheses. The number of words of length 2n in the Dyck language is equal to the Catalan number Cₙ ~ 4ⁿ/(n^(3/2)√π), which is not of the form p(n)λⁿ, witnessing the non-regularity of the Dyck language. Care must be taken since some of the eigenvalues λᵢ could have the same magnitude. For example, the number of words of length n in the language of all even binary words is not of the form p(n)λⁿ, but the numbers of words of even or odd length are of this form; the corresponding eigenvalues are ±2. In general, for every regular language there exists a constant λ such that for all a, the number of words of length an is asymptotically Cₐ n^(pₐ) λ^(an).[38]
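
The constant-recursive behaviour can be observed directly by counting accepting paths in a DFA, one length at a time. A minimal Python sketch (names invented for this illustration), applied to the two-state DFA for an even number of a's over {a, b}; the resulting counts 1, 1, 2, 4, 8, … satisfy s(n) = 2ⁿ⁻¹ for n ≥ 1, a single eigenvalue λ = 2.

    def word_counts(delta, start, accepting, max_len):
        """Number of accepted words of each length 0..max_len for a complete DFA.

        counts[state] holds the number of words of the current length that drive
        the DFA from `start` to `state`; one update per length is a linear
        recurrence, matching the constant-recursive description above.
        """
        states = {s for (s, _) in delta} | set(delta.values())
        counts = {s: (1 if s == start else 0) for s in states}
        totals = [sum(counts[s] for s in accepting)]
        for _ in range(max_len):
            nxt = {s: 0 for s in states}
            for (s, c), t in delta.items():
                nxt[t] += counts[s]
            counts = nxt
            totals.append(sum(counts[s] for s in accepting))
        return totals

    # DFA for an even number of a's over {a, b}.
    delta = {("e", "a"): "o", ("e", "b"): "e", ("o", "a"): "e", ("o", "b"): "o"}
    print(word_counts(delta, "e", {"e"}, 5))   # [1, 1, 2, 4, 8, 16]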

The zeta function of a language L is[33]

    ζ_L(z) = exp(∑_{n ≥ 1} s_L(n) zⁿ/n).

The zeta function of a regular language is not in general rational, but that of an arbitrary cyclic language is.[39][40]

Generalizations


The notion of a regular language has been generalized to infinite words (see ω-automata) and to trees (see tree automaton).

Rational set generalizes the notion (of regular/rational language) to monoids that are not necessarily free. Likewise, the notion of a recognizable language (one recognized by a finite automaton) generalizes to that of a recognizable set over a monoid that is not necessarily free. Howard Straubing notes in relation to these facts that “The term "regular language" is a bit unfortunate. Papers influenced by Eilenberg's monograph[41] often use either the term "recognizable language", which refers to the behavior of automata, or "rational language", which refers to important analogies between regular expressions and rational power series. (In fact, Eilenberg defines rational and recognizable subsets of arbitrary monoids; the two notions do not, in general, coincide.) This terminology, while better motivated, never really caught on, and "regular language" is used almost universally.”[42]

Rational series is another generalization, this time in the context of a formal power series over a semiring. This approach gives rise to weighted rational expressions and weighted automata. In this algebraic context, the regular languages (corresponding to Boolean-weighted rational expressions) are usually called rational languages.[43][44] Also in this context, Kleene's theorem finds a generalization called the Kleene–Schützenberger theorem.

Learning from examples


Notes


References


Further reading

from Grokipedia
In theoretical computer science and formal language theory, a regular language is a formal language consisting of a set of strings over a finite alphabet that can be recognized by a finite automaton, equivalently expressed using regular expressions, or generated by a regular grammar. Regular languages were introduced by the mathematician Stephen Cole Kleene in his 1951 technical report, published in 1956 as "Representation of Events in Nerve Nets and Finite Automata," where he defined the algebraic structure of regular events as a model for neural networks and sequential machines. The class of regular languages occupies the lowest level (Type-3) in the Chomsky hierarchy of formal grammars, which classifies languages based on the restrictions on production rules; regular languages are strictly contained within context-free languages and higher classes. They possess several closure properties, meaning that if regular languages are subjected to operations like union, concatenation, Kleene star (repetition), complement, intersection, or reversal, the resulting language remains regular. A fundamental necessary condition is provided by the pumping lemma, which states that for any regular language L, there exists a pumping length p such that every string s in L with length at least p can be divided as s = xyz where |xy| ≤ p, |y| > 0, and xyⁱz is in L for all nonnegative integers i; this lemma is often used to prove that certain languages are non-regular. Regular languages and their equivalent formalisms, such as deterministic finite automata (DFAs), nondeterministic finite automata (NFAs), and regular expressions, are foundational in computer science, enabling efficient algorithms for recognition and manipulation, since finite automata have finite memory and can process inputs in linear time. In practice, regular expressions power applications in text processing, lexical analysis in compilers (e.g., tokenizing source code), pattern matching in search tools, and built-in string-manipulation features of programming languages such as Python.

Fundamentals

Formal Definition

In formal language theory, an alphabet Σ is a finite nonempty set of symbols, such as Σ = {0, 1}. A string over Σ is a finite sequence of symbols from Σ, including the empty string ε, which has length zero and contains no symbols. The set of all finite strings over Σ, denoted Σ*, is known as the Kleene closure of Σ and forms the universal set for languages over Σ. A regular language over an alphabet Σ is a subset L ⊆ Σ* consisting of all strings denoted by a regular expression over Σ, or equivalently, all strings accepted by a finite automaton over Σ. This definition captures the simplest class of formal languages in the Chomsky hierarchy. The formal syntax of regular expressions over Σ is defined inductively as follows:
  • The empty set ∅ is a regular expression denoting the empty language.
  • Any symbol a ∈ Σ is a regular expression denoting the singleton set {a}.
  • The empty string ε is a regular expression denoting {ε}.
  • If R₁ and R₂ are regular expressions denoting languages L₁ and L₂, then:
    • (R₁ + R₂) (or (R₁ | R₂)) is a regular expression denoting L₁ ∪ L₂ (union).
    • (R₁ · R₂) is a regular expression denoting L₁L₂ = {xy | x ∈ L₁, y ∈ L₂} (concatenation).
    • R₁* is a regular expression denoting the Kleene star L₁* = ⋃_{n ≥ 0} L₁ⁿ, where L₁⁰ = {ε} and L₁ⁿ⁺¹ = L₁ⁿL₁ for n ≥ 0.
      Parentheses are used to indicate precedence, with star having highest precedence, followed by concatenation, then union.
The concept of regular languages was introduced by Stephen C. Kleene in 1956 as "regular events," defined inductively using union, concatenation, and star operations on sequences of input symbols, with equivalence to recognition by finite automata (nerve nets).

Examples

Regular languages can be illustrated through simple sets of strings that can be recognized by finite automata, which maintain a finite amount of memory to track patterns. A classic example is the language L₁ = { w ∈ {0,1}* | w ends with 1 }, consisting of all binary strings terminating in the symbol 1, such as 01, 101, and 111. This language is regular because a deterministic finite automaton (DFA) with two states suffices: a start state q₀ (non-accepting, representing that the last symbol was 0 or that no symbol has been read yet) and an accepting state q₁ (last symbol was 1), with transitions q₀ --0--> q₀, q₀ --1--> q₁, q₁ --0--> q₀, and q₁ --1--> q₁.
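
A direct transcription of that two-state machine into Python (a minimal sketch; the function name is invented for this illustration):

    def ends_with_one(word):
        """DFA for L1 = { w ∈ {0,1}* : w ends with 1 }."""
        state = "q0"                    # start state, non-accepting
        for symbol in word:
            state = "q1" if symbol == "1" else "q0"
        return state == "q1"            # accept iff the run ends in q1

    print(ends_with_one("101"))   # True
    print(ends_with_one("10"))    # False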