Regular expression

Blue highlights show the match results of the regular expression pattern: /r[aeiou]+/g (lowercase r followed by one or more lowercase vowels).

A regular expression (shortened as regex or regexp),[1] sometimes referred to as a rational expression,[2][3] is a sequence of characters that specifies a match pattern in text. Usually such patterns are used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. Regular expression techniques are developed in theoretical computer science and formal language theory.

The concept of regular expressions began in the 1950s, when the American mathematician Stephen Cole Kleene formalized the concept of a regular language. They came into common use with Unix text-processing utilities. Different syntaxes for writing regular expressions have existed since the 1980s, one being the POSIX standard and another, widely used, being the Perl syntax.

Regular expressions are used in search engines, in search and replace dialogs of word processors and text editors, in text processing utilities such as sed and AWK, and in lexical analysis. Regular expressions are supported in many programming languages. Library implementations are often called an "engine",[4][5] and many of these are available for reuse.

History

Stephen Cole Kleene, who introduced the concept

Regular expressions originated in 1951, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular events.[6][7] These arose in theoretical computer science, in the subfields of automata theory (models of computation) and the description and classification of formal languages, motivated by Kleene's attempt to describe early artificial neural networks. (Kleene introduced it as an alternative to McCulloch & Pitts's "prehensible", but admitted "We would welcome any suggestions as to a more descriptive term."[8]) Other early implementations of pattern matching include the SNOBOL language, which did not use regular expressions, but instead its own pattern matching constructs.

Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor[9] and lexical analysis in a compiler.[10] Among the first appearances of regular expressions in program form was when Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files.[9][11][12][13] For speed, Thompson implemented regular expression matching by just-in-time compilation (JIT) to IBM 7094 code on the Compatible Time-Sharing System, an important early example of JIT compilation.[14] He later added this capability to the Unix editor ed, which eventually led to the popular search tool grep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor: g/re/p meaning "Global search for Regular Expression and Print matching lines").[15] Around the same time that Thompson developed QED, a group of researchers including Douglas T. Ross implemented a tool based on regular expressions that is used for lexical analysis in compiler design.[10]

Many variations of these original forms of regular expressions were used in Unix[13] programs at Bell Labs in the 1970s, including lex, sed, AWK, and expr, and in other programs such as vi, and Emacs (which has its own, incompatible syntax and behavior). Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in the POSIX.2 standard in 1992.

In the 1980s, more complex regexes arose in Perl, which originally derived from a regex library written by Henry Spencer (1986), who later wrote an implementation for Tcl called Advanced Regular Expressions.[16] The Tcl library is a hybrid NFA/DFA implementation with improved performance characteristics. Software projects that have adopted Spencer's Tcl regular expression implementation include PostgreSQL.[17] Perl later expanded on Spencer's original library to add many new features.[18] Part of the effort in the design of Raku (formerly named Perl 6) is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition of parsing expression grammars.[19] The result is a mini-language called Raku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allow BNF-style definition of a recursive descent parser via sub-rules.

The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards like ISO SGML (preceded by the ANSI standard "GCA 101-1983") consolidated. Regexes form the kernel of these structure-specification language standards, as is evident in the DTD element group syntax. Prior to the use of regular expressions, many search languages allowed simple wildcards, for example "*" to match any sequence of characters, and "?" to match a single character. Relics of this can be found today in the glob syntax for filenames, and in the SQL LIKE operator.

Starting in 1997, Philip Hazel developed PCRE (Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and is used by many modern tools including PHP and Apache HTTP Server.[20]

Today, regexes are widely supported in programming languages, text processing programs (particularly lexers), advanced text editors, and some other programs. Regex support is part of the standard library of many programming languages, including Java and Python, and is built into the syntax of others, including Perl and ECMAScript. In the late 2010s, several companies started to offer hardware, FPGA,[21] and GPU[22] implementations of PCRE-compatible regex engines that are faster than CPU implementations.

Patterns


The phrase regular expressions, or regexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in a regular expression (that is, each character in the string describing its pattern) is either a metacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regex b., 'b' is a literal character that matches just 'b', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'b%', or 'bx', or 'b5'. Together, metacharacters and literal characters can be used to identify text of a given pattern or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example, . is a very general pattern, [a-z] (match all lowercase letters from 'a' to 'z') is less general and b is a precise pattern (matches just 'b'). The metacharacter syntax is designed specifically to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standard ASCII keyboard.

A very simple use of a regular expression in this syntax is locating a word spelled two different ways in a text editor: the regular expression seriali[sz]e matches both "serialise" and "serialize". Wildcard characters can also achieve this, but they are more limited in what they can pattern, having fewer metacharacters and a simpler language base.

The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is [+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?.
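
As a quick illustration, the following Perl sketch (Perl is the syntax used in the examples later in this article; the sample strings are ours) applies both patterns, anchoring the numeral regex with ^ and $ so the whole string must match:

my $line = "  some padded text\t";
$line =~ s/^[ \t]+|[ \t]+$//g;          # strip leading and trailing whitespace
print "[$line]\n";                      # [some padded text]

for my $v ("42", "-3.14", ".5", "6.02e23", "abc") {
    # all but "abc" are accepted as numerals
    print "$v is a numeral\n" if $v =~ /^[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?$/;
}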

Translating the Kleene star
(s* means "zero or more of s")

A regex processor translates a regular expression in the above syntax into an internal representation that can be executed and matched against a string representing the text being searched. One possible approach is Thompson's construction algorithm, which constructs a nondeterministic finite automaton (NFA); the NFA is then made deterministic, and the resulting deterministic finite automaton (DFA) is run on the target text string to recognize substrings that match the regular expression. The picture shows the NFA scheme N(s*) obtained from the regular expression s*, where s denotes a simpler regular expression that has in turn already been recursively translated to the NFA N(s).

Basic concepts


A regular expression, often called a pattern, specifies a set of strings required for a particular purpose. A simple way to specify a finite set of strings is to list its elements or members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the pattern H(ä|ae?)ndel; we say that this pattern matches each of the three strings. However, there can be many ways to write a regular expression for the same set of strings: for example, (Hän|Han|Haen)del also specifies the same set of three strings in this example.
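
Both patterns can be checked with a short Perl sketch (our illustration; the anchors ^ and $ are added so each whole name must match, and use utf8 is needed because the source contains the literal "ä"):

use utf8;                                # the pattern below contains a non-ASCII literal
for my $name ("Handel", "Händel", "Haendel", "Hendel") {
    my $p1 = $name =~ /^H(ä|ae?)ndel$/;
    my $p2 = $name =~ /^(Hän|Han|Haen)del$/;
    print "$name: ", ($p1 ? "match" : "no match"),
          ($p1 == $p2 ? "" : " (patterns disagree!)"), "\n";
}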

Most formalisms provide the following operations to construct regular expressions.

Boolean "or"
A vertical bar separates alternatives. For example, gray|grey can match "gray" or "grey".
Grouping
Parentheses are used to define the scope and precedence of the operators (among other uses). For example, gray|grey and gr(a|e)y are equivalent patterns which both describe the set of "gray" or "grey".
Quantification
A quantifier after an element (such as a token, character, or group) specifies how many times the preceding element is allowed to repeat. The most common quantifiers are the question mark ?, the asterisk * (derived from the Kleene star), and the plus sign + (Kleene plus).
? The question mark indicates zero or one occurrences of the preceding element. For example, colou?r matches both "color" and "colour".
* The asterisk indicates zero or more occurrences of the preceding element. For example, ab*c matches "ac", "abc", "abbc", "abbbc", and so on.
+ The plus sign indicates one or more occurrences of the preceding element. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac".
{n}[23] The preceding item is matched exactly n times.
{min,}[23] The preceding item is matched min or more times.
{,max}[23] The preceding item is matched up to max times.
{min,max}[23] The preceding item is matched at least min times, but not more than max times.
Wildcard
The wildcard . matches any character. For example,
a.b matches any string that contains an "a", and then any character and then "b".
a.*b matches any string that contains an "a", and then the character "b" at some later point.
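
The following short Perl sketch (an illustration with made-up sample strings) exercises each of these constructs, anchored so each whole string must match:

my @samples = ("ac", "abc", "abbc", "axyb", "color", "colour");
print "ab?c    : ", join(" ", grep { /^ab?c$/ } @samples), "\n";      # ac abc
print "ab*c    : ", join(" ", grep { /^ab*c$/ } @samples), "\n";      # ac abc abbc
print "ab+c    : ", join(" ", grep { /^ab+c$/ } @samples), "\n";      # abc abbc
print "a..b    : ", join(" ", grep { /^a..b$/ } @samples), "\n";      # axyb
print "colou?r : ", join(" ", grep { /^colou?r$/ } @samples), "\n";   # color colour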

These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷.

The precise syntax for regular expressions varies among tools and with context; more detail is given in § Syntax.

Formal language theory


Regular expressions describe regular languages in formal language theory. They have the same expressive power as regular grammars. The language of regular expressions itself, however, is a context-free language.

Formal definition


Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory.[24][25] Given a finite alphabet Σ, the following constants are defined as regular expressions:

  • (empty set) ∅ denoting the set ∅.
  • (empty string) ε denoting the set containing only the "empty" string, which has no characters at all.
  • (literal character) a in Σ denoting the set containing only the character a.

Given regular expressions R and S, the following operations over them are defined to produce regular expressions:

  • (concatenation) (RS) denotes the set of strings that can be obtained by concatenating a string accepted by R and a string accepted by S (in that order). For example, let R denote {"ab", "c"} and S denote {"d", "ef"}. Then, (RS) denotes {"abd", "abef", "cd", "cef"}.
  • (alternation) (R|S) denotes the set union of sets described by R and S. For example, if R describes {"ab", "c"} and S describes {"ab", "d", "ef"}, expression (R|S) describes {"ab", "c", "d", "ef"}.
  • (Kleene star) (R*) denotes the smallest superset of the set described by R that contains ε and is closed under string concatenation. This is the set of all strings that can be made by concatenating any finite number (including zero) of strings from the set described by R. For example, if R denotes {"0", "1"}, (R*) denotes the set of all finite binary strings (including the empty string). If R denotes {"ab", "c"}, (R*) denotes {ε, "ab", "c", "abab", "abc", "cab", "cc", "ababab", "abcab", ...}.

To avoid parentheses, it is assumed that the Kleene star has the highest priority followed by concatenation, then alternation. If there is no ambiguity, then parentheses may be omitted. For example, (ab)c can be written as abc, and a|(b(c*)) can be written as a|bc*. Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar.

Examples:

  • a|b* denotes {ε, "a", "b", "bb", "bbb", ...}
  • (a|b)* denotes the set of all strings with no symbols other than "a" and "b", including the empty string: {ε, "a", "b", "aa", "ab", "ba", "bb", "aaa", ...}
  • ab*(c|ε) denotes the set of strings starting with "a", then zero or more "b"s and finally optionally a "c": {"a", "ac", "ab", "abc", "abb", "abbc", ...}
  • (0|(1(01*0)*1))* denotes the set of binary numbers that are multiples of 3: { ε, "0", "00", "11", "000", "011", "110", "0000", "0011", "0110", "1001", "1100", "1111", "00000", ...}
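
The last example can be checked mechanically; this Perl sketch (our illustration) anchors the expression and tests it against the binary form of each small integer:

for my $n (0 .. 12) {
    my $bin = sprintf "%b", $n;
    printf "%2d = %6s : %s\n", $n, $bin,
           $bin =~ /^(0|(1(01*0)*1))*$/ ? "multiple of 3" : "not a multiple";
}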

The derivative of a regular expression can be defined using the Brzozowski derivative.

Expressive power and compactness


The formal definition of regular expressions is minimal on purpose, and avoids defining ? and +—these can be expressed as follows: a+=aa*, and a?=(a|ε). Sometimes the complement operator is added, to give a generalized regular expression; here Rc matches all strings over Σ* that do not match R. In principle, the complement operator is redundant, because it does not grant any more expressive power. However, it can make a regular expression much more concise—eliminating a single complement operator can cause a double exponential blow-up of its length.[26][27][28]

Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages Lk consisting of all strings over the alphabet {a,b} whose kth-from-last letter equals a. On the one hand, a regular expression describing L4 is given by (a|b)*a(a|b)(a|b)(a|b).

Generalizing this pattern to Lk gives the expression (a|b)*a(a|b)...(a|b), with k-1 trailing copies of (a|b).

On the other hand, it is known that every deterministic finite automaton accepting the language Lk must have at least 2k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy.[24]

In the opposite direction, there are many languages easily described by a DFA that are not easily described by a regular expression. For instance, determining the validity of a given ISBN requires computing the modulus of the integer base 11, and can be easily implemented with an 11-state DFA. However, converting it to a regular expression results in a 2.14 megabyte file.[29]

Given a regular expression, Thompson's construction algorithm computes an equivalent nondeterministic finite automaton. A conversion in the opposite direction is achieved by Kleene's algorithm.

Finally, many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implement regexes. See below for more on this.

Deciding equivalence of regular expressions


As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results.

It is possible to write an algorithm that, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to a minimal deterministic finite state machine, and determines whether they are isomorphic (equivalent).
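
Minimization-based equivalence checking is beyond a short example, but the idea can be illustrated with a bounded brute-force comparison in Perl (a heuristic sketch, not the decision procedure just described): the expressions a(ba)* and (ab)*a denote the same language, which the sketch confirms for every string over {a,b} up to length 8.

my @strings  = ("");
my @frontier = ("");
for (1 .. 8) {                                   # enumerate all strings over {a,b} up to length 8
    @frontier = map { ($_ . "a", $_ . "b") } @frontier;
    push @strings, @frontier;
}
my @disagree = grep { (/^a(ba)*$/ ? 1 : 0) != (/^(ab)*a$/ ? 1 : 0) } @strings;
print @disagree ? "differ on '$disagree[0]'\n"
                : "agree on all strings up to length 8\n";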

Algebraic laws for regular expressions can be obtained using a method by Gischer which is best explained by an example: in order to check whether (X+Y)* and (X*Y*)* denote the same regular language for all regular expressions X, Y, it is necessary and sufficient to check whether the particular regular expressions (a+b)* and (a*b*)* denote the same language over the alphabet Σ={a,b}. More generally, an equation E=F between regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds.[30][31]

Every regular expression can be written solely in terms of the Kleene star and set unions over finite words. Rewriting expressions into a normal form is a surprisingly difficult problem: as simple as regular expressions are, there is no method to systematically rewrite them to some normal form. The lack of an axiomatization in the past led to the star height problem. In 1991, Dexter Kozen axiomatized regular expressions as a Kleene algebra, using equational and Horn clause axioms.[32] Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize the algebra of regular languages.[33]

Syntax


A regex pattern matches a target string. The pattern is composed of a sequence of atoms. An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using ( ) as metacharacters. Metacharacters help form: atoms; quantifiers telling how many atoms (and whether it is a greedy quantifier or not); a logical OR character, which offers a set of alternatives, and a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made, not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for a large number of possible strings, rather than compiling a large list of all the literal possibilities.

Depending on the regex processor, there are about fourteen metacharacters: characters that may or may not have their literal meaning, depending on context, or on whether they are "escaped", i.e. preceded by an escape sequence, in this case the backslash \. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" or leaning toothpick syndrome, escaping a metacharacter switches it to literal mode; the older basic syntax, however, instead treats the four bracketing metacharacters ( ) and { } as primarily literal, and "escapes" this usual meaning to make them metacharacters. Common standards implement both. The usual metacharacters are {}[]()^$.|*+? and \. The usual characters that become metacharacters when escaped are dswDSW and N.

Delimiters


When entering a regex in a programming language, they may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python for instance, where the regex re is entered as "re". However, they are often written with slashes as delimiters, as in /re/ for the regex re. This originates in ed, where / is the editor command for searching, and an expression /re/ can be used to specify a range of lines (matching the pattern), which can be combined with other commands on either side, most famously g/re/p as in grep ("global regex print"), which is included in most Unix-based operating systems, such as Linux distributions. A similar convention is used in sed, where search and replace is given by s/re/replacement/ and patterns can be joined with a comma to specify a range of lines as in /re1/,/re2/. This notation is particularly well known due to its use in Perl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the command s,/,X, will replace a / with an X, using commas as delimiters.
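
In Perl, the same escape-avoidance is available by choosing bracketing delimiters; a small sketch (the sample path is ours):

my $path = "/usr/local/bin";
(my $slashed = $path) =~ s/\//!/g;    # default delimiters force each slash to be escaped
(my $braced  = $path) =~ s{/}{!}g;    # braces as delimiters keep the pattern readable
print "$slashed\n$braced\n";          # !usr!local!bin (twice)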

IEEE POSIX Standard


The IEEE POSIX standard has three sets of compliance: BRE (Basic Regular Expressions),[34] ERE (Extended Regular Expressions), and SRE (Simple Regular Expressions). SRE is deprecated,[35] in favor of BRE, as both provide backward compatibility. The subsection below covering the character classes applies to both BRE and ERE.

BRE and ERE work together. ERE adds ?, +, and |, and it removes the need to escape the metacharacters ( ) and { }, which are required in BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide a "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example, GNU grep has the following options: "grep -E" for ERE, and "grep -G" for BRE (the default), and "grep -P" for Perl regexes.

Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs, ( ) and { } are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includes lazy matching, backreferences, named capture groups, and recursive patterns.

POSIX basic and extended


In the POSIX standard, Basic Regular Syntax (BRE) requires that the metacharacters ( ) and { } be designated \(\) and \{\}, whereas Extended Regular Syntax (ERE) does not.

Metacharacter Description
^ Matches the starting position within the string. In line-based tools, it matches the starting position of any line.
. Matches any single character (many applications exclude newlines, and exactly which characters are considered newlines is flavor-, character-encoding-, and platform-specific, but it is safe to assume that the line feed character is included). Within POSIX bracket expressions, the dot character matches a literal dot. For example, a.c matches "abc", etc., but [a.c] matches only "a", ".", or "c".
[ ] A bracket expression. Matches a single character that is contained within the brackets. For example, [abc] matches "a", "b", or "c". [a-z] specifies a range which matches any lowercase letter from "a" to "z". These forms can be mixed: [abcx-z] matches "a", "b", "c", "x", "y", or "z", as does [a-cx-z].

The - character is treated as a literal character if it is the last or the first (after the ^, if present) character within the brackets: [abc-], [-abc], [^-abc]. Backslash escapes are not allowed. The ] character can be included in a bracket expression if it is the first (after the ^, if present) character: []abc], [^]abc].

[^ ] Matches a single character that is not contained within the brackets. For example, [^abc] matches any character other than "a", "b", or "c". [^a-z] matches any single character that is not a lowercase letter from "a" to "z". Likewise, literal characters and ranges can be mixed.
$ Matches the ending position of the string or the position just before a string-ending newline. In line-based tools, it matches the ending position of any line.
( ) Defines a marked subexpression, also called a capturing group, which is essential for extracting the desired part of the text (See also the next entry, \n). BRE mode requires \( \).
\n Matches what the nth marked subexpression matched, where n is a digit from 1 to 9. This construct is defined in the POSIX standard.[36] Some tools allow referencing more than nine capturing groups. Also known as a back-reference, this feature is supported in BRE mode.
* Matches the preceding element zero or more times. For example, ab*c matches "ac", "abc", "abbbc", etc. [xyz]* matches "", "x", "y", "z", "zx", "zyx", "xyzzy", and so on. (ab)* matches "", "ab", "abab", "ababab", and so on.
{m,n} Matches the preceding element at least m and not more than n times. For example, a{3,5} matches only "aaa", "aaaa", and "aaaaa". This is not found in a few older instances of regexes. BRE mode requires \{m,n\}.

Examples:

  • .at matches any three-character string ending with "at", including "hat", "cat", "bat", "4at", "#at" and " at" (starting with a space).
  • [hc]at matches "hat" and "cat".
  • [^b]at matches all strings matched by .at except "bat".
  • [^hc]at matches all strings matched by .at other than "hat" and "cat".
  • ^[hc]at matches "hat" and "cat", but only at the beginning of the string or line.
  • [hc]at$ matches "hat" and "cat", but only at the end of the string or line.
  • \[.\] matches any single character surrounded by "[" and "]" since the brackets are escaped, for example: "[a]", "[b]", "[7]", "[@]", "[]]", and "[ ]" (bracket space bracket).
  • s.* matches s followed by zero or more characters, for example: "s", "saw", "seed", "s3w96.7", and "s6#h%(>>>m n mQ".

According to Russ Cox, the POSIX specification requires ambiguous subexpressions to be handled in a way different from Perl's. The committee replaced Perl's rules with one that is simple to explain, but the new "simple" rules are actually more complex to implement: they were incompatible with pre-existing tooling and made it essentially impossible to define a "lazy match" (see below) extension. As a result, very few programs actually implement the POSIX subexpression rules (even when they implement other parts of the POSIX syntax).[37]

Metacharacters in POSIX extended


The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, \( \) is now ( ) and \{ \} is now { }. Additionally, support is removed for \n backreferences and the following metacharacters are added:

Metacharacter Description
? Matches the preceding element zero or one time. For example, ab?c matches only "ac" or "abc".
+ Matches the preceding element one or more times. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac".
| The choice (also known as alternation or set union) operator matches either the expression before or the expression after the operator. For example, abc|def matches "abc" or "def".

Examples:

  • [hc]?at matches "at", "hat", and "cat".
  • [hc]*at matches "at", "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on.
  • [hc]+at matches "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on, but not "at".
  • cat|dog matches "cat" or "dog".

POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E.

Character classes


The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match a larger set of characters. For example, [A-Z] could stand for any uppercase letter in the English alphabet, and \d could mean any digit. Character classes apply to both POSIX levels.

When specifying a range of characters, such as [a-Z] (i.e. lowercase a to uppercase Z), the computer's locale settings determine the contents by the numeric ordering of the character encoding. They could store digits in that sequence, or the ordering could be abc...zABC...Z, or aAbBcC...zZ. So the POSIX standard defines a character class, which will be known by the regex processor installed. Those definitions are in the following table:

Description | POSIX | Perl/Tcl | Vim | Java | ASCII
ASCII characters | | | | \p{ASCII} | [\x00-\x7F]
Alphanumeric characters | [:alnum:] | | | \p{Alnum} | [A-Za-z0-9]
Alphanumeric characters plus "_" | | \w | \w | \w | [A-Za-z0-9_]
Non-word characters | | \W | \W | \W | [^A-Za-z0-9_]
Alphabetic characters | [:alpha:] | | \a | \p{Alpha} | [A-Za-z]
Space and tab | [:blank:] | | \s | \p{Blank} | [ \t]
Word boundaries | | \b | \< \> | \b | (?<=\W)(?=\w)|(?<=\w)(?=\W)
Non-word boundaries | | \B | | | (?<=\W)(?=\W)|(?<=\w)(?=\w)
Control characters | [:cntrl:] | | | \p{Cntrl} | [\x00-\x1F\x7F]
Digits | [:digit:] | \d | \d | \p{Digit} or \d | [0-9]
Non-digits | | \D | \D | \D | [^0-9]
Visible characters | [:graph:] | | | \p{Graph} | [\x21-\x7E]
Lowercase letters | [:lower:] | | \l | \p{Lower} | [a-z]
Visible characters and the space character | [:print:] | | \p | \p{Print} | [\x20-\x7E]
Punctuation characters | [:punct:] | | | \p{Punct} | [][!"#$%&'()*+,./:;<=>?@\^_`{|}~-]
Whitespace characters | [:space:] | \s | \_s | \p{Space} or \s | [ \t\r\n\v\f]
Non-whitespace characters | | \S | \S | \S | [^ \t\r\n\v\f]
Uppercase letters | [:upper:] | | \u | \p{Upper} | [A-Z]
Hexadecimal digits | [:xdigit:] | | \x | \p{XDigit} | [A-Fa-f0-9]

POSIX character classes can only be used within bracket expressions. For example, [[:upper:]ab] matches the uppercase letters and lowercase "a" and "b".

An additional non-POSIX class understood by some tools is [:word:], which is usually defined as [:alnum:] plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editor Vim further distinguishes word and word-head classes (using the notation \w and \h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions: numbers are generally excluded, so an identifier would look like \h\w* or [[:alpha:]_][[:alnum:]_]* in POSIX notation.
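
A short Perl sketch (with an illustrative token list of ours) showing the POSIX bracket-expression form for identifiers alongside the equivalent Perl shorthand:

for my $tok (qw(foo _bar x1 9lives else)) {
    my $posix = $tok =~ /^[[:alpha:]_][[:alnum:]_]*$/;
    my $perl  = $tok =~ /^[A-Za-z_]\w*$/;
    print "$tok: ", ($posix ? "identifier" : "not an identifier"),
          ($posix == $perl ? "" : " (forms disagree)"), "\n";
}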

Note that what the POSIX regex standards call character classes are commonly referred to as POSIX character classes in other regex flavors which support them. With most other regex flavors, the term character class is used to describe what POSIX calls bracket expressions.

Perl and PCRE


Because of its expressive power and (relative) ease of reading, many other utilities and programming languages have adopted syntax similar to Perl's—for example, Java, JavaScript, Julia, Python, Ruby, Qt, Microsoft's .NET Framework, and XML Schema. Some languages and tools such as Boost and PHP support multiple regex flavors. Perl-derivative regex implementations are not identical and usually implement a subset of features found in Perl 5.0, released in 1994. Perl sometimes does incorporate features initially found in other languages. For example, Perl 5.10 implements syntactic extensions originally developed in PCRE and Python.[38]

Lazy matching


In Python and some other implementations (e.g. Java), the three common quantifiers (*, +, and ?) are greedy by default: they match as many characters as possible.[39] The regex ".+" (including the double-quotes) applied to the string

"Ganymede," he continued, "is the largest moon in the Solar System."

matches the entire line (because the entire line begins and ends with a double-quote) instead of matching only the first part, "Ganymede,". The aforementioned quantifiers may, however, be made lazy or minimal or reluctant, matching as few characters as possible, by appending a question mark: ".+?" matches only "Ganymede,".[39]
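
In Perl, the difference looks like this (a minimal sketch of the example above):

my $string = '"Ganymede," he continued, "is the largest moon in the Solar System."';
my ($greedy) = $string =~ /(".+")/;    # .+ grabs as much as possible
my ($lazy)   = $string =~ /(".+?")/;   # .+? stops at the first closing quote
print "greedy: $greedy\n";             # the whole line
print "lazy:   $lazy\n";               # "Ganymede,"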

Possessive matching


In Java and Python 3.11+,[40] quantifiers may be made possessive by appending a plus sign, which disables backing off (in a backtracking engine) even if doing so would allow the overall match to succeed.[41] While the regex ".*" applied to the string

"Ganymede," he continued, "is the largest moon in the Solar System."

matches the entire line, the regex ".*+" does not match at all, because .*+ consumes the entire input, including the final ". Thus, possessive quantifiers are most useful with negated character classes, e.g. "[^"]*+", which matches "Ganymede," when applied to the same string.

Another common extension serving the same function is atomic grouping, which disables backtracking for a parenthesized group. The typical syntax is (?>group). For example, while ^(wi|w)i$ matches both wi and wii, ^(?>wi|w)i$ only matches wii because the engine is forbidden from backtracking and so cannot try setting the group to "w" after matching "wi".[42]
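
Both behaviors can be tried in Perl 5.10 or later, which also supports possessive quantifiers and atomic groups (a sketch of the examples above):

my $string = '"Ganymede," he continued, "is the largest moon in the Solar System."';
print "possessive .*+ fails\n" unless $string =~ /".*+"/;    # .*+ eats the final " and cannot back off
print "matched: $&\n"          if     $string =~ /"[^"]*+"/; # negated class stops at the next quote
print "wii matches\n"          if     "wii" =~ /^(?>wi|w)i$/;
print "wi fails\n"             unless "wi"  =~ /^(?>wi|w)i$/;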

Possessive quantifiers are easier to implement than greedy and lazy quantifiers, and are typically more efficient at runtime.[41]

IETF I-Regexp


IETF RFC 9485 describes "I-Regexp: An Interoperable Regular Expression Format". It specifies a limited subset of regular-expression idioms designed to be interoperable, i.e. produce the same effect, in a large number of regular-expression libraries. I-Regexp is also limited to matching, i.e. providing a true or false match between a regular expression and a given piece of text. Thus, it lacks advanced features such as capture groups, lookahead, and backreferences.[43]

Patterns for non-regular languages


Many features found in virtually all modern regular expression libraries provide an expressive power that exceeds the regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression (backreferences). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", called squares in formal language theory. The pattern for these strings is (.+)\1.
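
In Perl this can be tried directly (the sample words are ours; the pattern is anchored so the whole string must be a square):

for my $word ("papa", "WikiWiki", "orange") {
    if ($word =~ /^(.+)\1$/) {
        print "$word = '$1' repeated\n";     # papa = 'pa', WikiWiki = 'Wiki'
    } else {
        print "$word is not a square\n";
    }
}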

The language of squares is not regular, nor is it context-free, due to the pumping lemma. However, pattern matching with an unbounded number of backreferences, as supported by numerous modern tools, is still context sensitive.[44] The general problem of matching any number of backreferences is NP-complete, and the execution time for known algorithms grows exponentially by the number of backreference groups used.[45]

However, many tools, libraries, and engines that provide such constructions still use the term regular expression for their patterns. This has led to a nomenclature where the term regular expression has different meanings in formal language theory and pattern matching. For this reason, some people have taken to using the term regex, regexp, or simply pattern to describe the latter. Larry Wall, author of the Perl programming language, writes in an essay about the design of Raku:

"Regular expressions" […] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).[19]

Assertions

Assertion | Lookbehind | Lookahead
Positive | (?<=pattern) | (?=pattern)
Negative | (?<!pattern) | (?!pattern)

Lookbehind and lookahead assertions in Perl regular expressions

Other features not found in describing regular languages include assertions. These include the ubiquitous ^ and $, used since at least 1970,[46] as well as some more sophisticated extensions like lookaround that appeared in 1994.[47] Lookarounds define the surrounding of a match and do not spill into the match itself, a feature only relevant for the use case of string searching.[citation needed] Some of them can be simulated in a regular language by treating the surroundings as a part of the language as well.[48]

The look-ahead assertions (?=...) and (?!...) have been attested since at least 1994, starting with Perl 5.[47] The lookbehind assertions (?<=...) and (?<!...) are attested since 1997 in a commit by Ilya Zakharevich to Perl 5.005.[49]
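
A brief Perl sketch of both assertion kinds (the sample text is ours):

my $text = "1 apple, 12 apples, 2 bananas";
my @counts = $text =~ /\d+(?= apples?)/g;    # lookahead: numbers followed by " apple(s)"
print "@counts\n";                           # 1 12
my ($word) = $text =~ /(?<=, )(\w+)/;        # lookbehind: first token preceded by ", "
print "$word\n";                             # 12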

Implementations and running times


There are at least three different algorithms that decide whether and how a given regex matches a string.

The oldest and fastest relies on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the resulting input string one symbol at a time. Constructing the DFA for a regular expression of size m has the time and memory cost of O(2^m), but it can be run on a string of size n in time O(n). Note that the size of the expression is the size after abbreviations, such as numeric quantifiers, have been expanded.

An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step. This keeps the DFA implicit and avoids the exponential construction cost, but running cost rises to O(mn). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky.[50][51] Modern implementations include the re1-re2-sregex family based on Cox's code.

The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like (a|aa)*b that contain both alternation and unbounded quantification and force the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS).
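
The blowup can be observed directly with a close variant of this pattern. In the Perl sketch below (our choice of string lengths; absolute times vary by machine, and some engines mitigate particular patterns, so treat the numbers as illustrative), the anchored expression ^(a|aa)*$ is run against strings of "a"s followed by a final "b", so it can never match:

use Time::HiRes qw(time);
for my $n (20, 24, 28, 32) {
    my $subject = ("a" x $n) . "b";
    my $t0 = time;
    $subject =~ /^(a|aa)*$/;                 # must fail, after exploring every split of the a's
    printf "n=%2d: %.3f s\n", $n, time() - $t0;
}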

Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and revert to a potentially slower backtracking algorithm only when a backreference is encountered during the match. GNU grep (and the underlying gnulib DFA) uses such a strategy.[52]

Sublinear runtime algorithms have been achieved using Boyer-Moore (BM) based algorithms and related DFA optimization techniques such as the reverse scan.[53] GNU grep, which supports a wide variety of POSIX syntaxes and extensions, uses BM for a first-pass prefiltering, and then uses an implicit DFA. Wu agrep, which implements approximate matching, combines the prefiltering into the DFA in BDM (backward DAWG matching). NR-grep's BNDM extends the BDM technique with Shift-Or bit-level parallelism.[54]

A few theoretical alternatives to backtracking for backreferences exist, and their "exponents" are tamer in that they are only related to the number of backreferences, a fixed property of some regexp languages such as POSIX. One naive method that duplicates a non-backtracking NFA for each backreference node has time and space costs that are polynomial in the haystack length n, with exponents that grow with the number of backreferences k in the regexp.[55] A very recent theoretical work based on memory automata gives a tighter bound based on "active" variable nodes used, and a polynomial possibility for some backreferenced regexps.[56]

Unicode


In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regexes were originally written to use ASCII characters as their token set though regex libraries have supported numerous other character sets. Many modern regex engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regexes to support Unicode.

  • Supported encoding. Some regex libraries expect to work on some particular encoding instead of on abstract Unicode characters. Many of these require the UTF-8 encoding, while others might expect UTF-16, or UTF-32. In contrast, Perl and Java are agnostic on encodings, instead operating on decoded characters internally.
  • Supported Unicode range. Many regex engines support only the Basic Multilingual Plane, that is, the characters which can be encoded with only 16 bits. Currently (as of 2016) only a few regex engines (e.g., Perl's and Java's) can handle the full 21-bit Unicode range.
  • Extending ASCII-oriented constructs to Unicode. For example, in ASCII-based implementations, character ranges of the form [x-y] are valid wherever x and y have code points in the range [0x00,0x7F] and codepoint(x) ≤ codepoint(y). The natural extension of such character ranges to Unicode would simply change the requirement that the endpoints lie in [0x00,0x7F] to the requirement that they lie in [0x0000,0x10FFFF]. However, in practice this is often not the case. Some implementations, such as that of gawk, do not allow character ranges to cross Unicode blocks. A range like [0x61,0x7F] is valid since both endpoints fall within the Basic Latin block, as is [0x0530,0x0560] since both endpoints fall within the Armenian block, but a range like [0x0061,0x0532] is invalid since it includes multiple Unicode blocks. Other engines, such as that of the Vim editor, allow block-crossing but the character values must not be more than 256 apart.[57]
  • Case insensitivity. Some case-insensitivity flags affect only the ASCII characters. Other flags affect all characters. Some engines have two different flags, one for ASCII, the other for Unicode. Exactly which characters belong to the POSIX classes also varies.
  • Cousins of case insensitivity. As ASCII has case distinction, case insensitivity became a logical feature in text searching. Unicode introduced alphabetic scripts without case like Devanagari. For these, case sensitivity is not applicable. For scripts like Chinese, another distinction seems logical: between traditional and simplified. In Arabic scripts, insensitivity to initial, medial, final, and isolated position may be desired. In Japanese, insensitivity between hiragana and katakana is sometimes useful.
  • Normalization. Unicode has combining characters. Like old typewriters, plain base characters (white spaces, punctuation characters, symbols, digits, or letters) can be followed by one or more non-spacing symbols (usually diacritics, like accent marks modifying letters) to form a single printable character; but Unicode also provides a limited set of precomposed characters, i.e. characters that already include one or more combining characters. A sequence of a base character + combining characters should be matched with the identical single precomposed character (only some of these combining sequences can be precomposed into a single Unicode character, but infinitely many other combining sequences are possible in Unicode, and needed for various languages, using one or more combining characters after an initial base character; these combining sequences may include a base character or combining characters partially precomposed, but not necessarily in canonical order and not necessarily using the canonical precompositions). The process of standardizing sequences of a base character + combining characters by decomposing these canonically equivalent sequences, before reordering them into canonical order (and optionally recomposing some combining characters into the leading base character) is called normalization.
  • New control codes. Unicode introduced, among other codes, byte order marks and text direction markers. These codes might have to be dealt with in a special way.
  • Introduction of character classes for Unicode blocks, scripts, and numerous other character properties. Block properties are much less useful than script properties, because a block can have code points from several different scripts, and a script can have code points from several different blocks.[58] In Perl and the java.util.regex library, properties of the form \p{InX} or \p{Block=X} match characters in block X and \P{InX} or \P{Block=X} matches code points not in that block. Similarly, \p{Armenian}, \p{IsArmenian}, or \p{Script=Armenian} matches any character in the Armenian script. In general, \p{X} matches any character with either the binary property X or the general category X. For example, \p{Lu}, \p{Uppercase_Letter}, or \p{GC=Lu} matches any uppercase letter. Binary properties that are not general categories include \p{White_Space}, \p{Alphabetic}, \p{Math}, and \p{Dash}. Examples of non-binary properties are \p{Bidi_Class=Right_to_Left}, \p{Word_Break=A_Letter}, and \p{Numeric_Value=10}.
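
In Perl, these property escapes can be exercised directly; a small sketch (the sample string is ours, and the output encoding is set explicitly because the string contains non-ASCII characters):

use utf8;                                    # the source itself contains non-ASCII text
binmode STDOUT, ":encoding(UTF-8)";
my $text = "Weiß kostet 3 Ω.";
print "uppercase: ", join(" ", $text =~ /\p{Lu}/g), "\n";     # W Ω
print "greek:     ", join(" ", $text =~ /\p{Greek}/g), "\n";  # Ω
print "digits:    ", join(" ", $text =~ /\p{Nd}/g), "\n";     # 3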

Language support


Most general-purpose programming languages support regex capabilities, either natively or via libraries.

Uses


Regexes are useful in a wide variety of text processing tasks, and more generally string processing, where the data need not be textual. Common applications include data validation, data scraping (especially web scraping), data wrangling, simple parsing, the production of syntax highlighting systems, and many other tasks.

Some high-end desktop publishing software has the ability to use regexes to automatically apply text styling, saving the person doing the layout from laboriously doing this by hand for anything that can be matched by a regex. For example, by defining a character style that makes text into small caps and then using the regex [A-Z]{4,} to apply that style, any word of four or more consecutive capital letters will be automatically rendered as small caps instead.

While regexes would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions include Google Code Search and Exalead. However, Google Code Search was shut down in January 2012.[59]

Examples


The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions.

Because regexes can be difficult to both explain and understand without examples, interactive websites for testing regexes are a useful resource for learning regexes by experimentation. This section provides a basic description of some of the properties of regexes by way of illustration.

The following conventions are used in the examples.[60]

metacharacter(s) ;; the metacharacters column specifies the regex syntax being demonstrated
=~ m//           ;; indicates a regex match operation in Perl
=~ s///          ;; indicates a regex substitution operation in Perl

These regexes are all Perl-like syntax. Standard POSIX regular expressions are different.

Unless otherwise indicated, the following examples conform to the Perl programming language, release 5.8.8, January 31, 2006. This means that other implementations may lack support for some parts of the syntax shown here (e.g. basic vs. extended regex, \( \) vs. (), or lack of \d instead of POSIX [:digit:]).

The syntax and conventions used in these examples coincide with those of other programming environments as well.[61]

Meta­character(s) Description Example[62]
. Normally matches any character except a newline.
Within square brackets the dot is literal.
$string1 = "Hello World\n";
if ($string1 =~ m/...../) {
  print "$string1 has length >= 5.\n";
}

Output:

Hello World
 has length >= 5.
( ) Groups a series of pattern elements to a single element.
When you match a pattern within parentheses, you can use any of $1, $2, ... later to refer to the previously matched pattern. Some implementations may use a backslash notation instead, like \1, \2.
$string1 = "Hello World\n";
if ($string1 =~ m/(H..).(o..)/) {
  print "We matched '$1' and '$2'.\n";
}

Output:

We matched 'Hel' and 'o W'.
+ Matches the preceding pattern element one or more times.
$string1 = "Hello World\n";
if ($string1 =~ m/l+/) {
  print "There are one or more consecutive letter \"l\"'s in $string1.\n";
}

Output:

There are one or more consecutive letter "l"'s in Hello World.
? Matches the preceding pattern element zero or one time.
$string1 = "Hello World\n";
if ($string1 =~ m/H.?e/) {
  print "There is an 'H' and a 'e' separated by ";
  print "0-1 characters (e.g., He Hue Hee).\n";
}

Output:

There is an 'H' and a 'e' separated by 0-1 characters (e.g., He Hue Hee).
? Modifies the *, +, ? or {M,N}'d regex that comes before to match as few times as possible.
$string1 = "Hello World\n";
if ($string1 =~ m/(l.+?o)/) {
  print "The non-greedy match with 'l' followed by one or ";
  print "more characters is 'llo' rather than 'llo Wo'.\n";
}

Output:

The non-greedy match with 'l' followed by one or more characters is 'llo' rather than 'llo Wo'.
* Matches the preceding pattern element zero or more times.
$string1 = "Hello World\n";
if ($string1 =~ m/el*o/) {
  print "There is an 'e' followed by zero to many ";
  print "'l' followed by 'o' (e.g., eo, elo, ello, elllo).\n";
}

Output:

There is an 'e' followed by zero to many 'l' followed by 'o' (e.g., eo, elo, ello, elllo).
{M,N} Denotes the minimum M and the maximum N match count.
N can be omitted and M can be 0: {M} matches "exactly" M times; {M,} matches "at least" M times; {0,N} matches "at most" N times.
x* y+ z? is thus equivalent to x{0,} y{1,} z{0,1}.
$string1 = "Hello World\n";
if ($string1 =~ m/l{1,2}/) {
  print "There exists a substring with at least 1 ";
  print "and at most 2 l's in $string1\n";
}

Output:

There exists a substring with at least 1 and at most 2 l's in Hello World
[…] Denotes a set of possible character matches.
$string1 = "Hello World\n";
if ($string1 =~ m/[aeiou]+/) {
  print "$string1 contains one or more vowels.\n";
}

Output:

Hello World
 contains one or more vowels.
| Separates alternate possibilities.
$string1 = "Hello World\n";
if ($string1 =~ m/(Hello|Hi|Pogo)/) {
  print "$string1 contains at least one of Hello, Hi, or Pogo.";
}

Output:

Hello World
 contains at least one of Hello, Hi, or Pogo.
\b Matches a zero-width boundary between a word-class character (see next) and either a non-word class character or an edge; same as

(^\w|\w$|\W\w|\w\W).

$string1 = "Hello World\n";
if ($string1 =~ m/llo\b/) {
  print "There is a word that ends with 'llo'.\n";
}

Output:

There is a word that ends with 'llo'.
\w Matches an alphanumeric character, including "_";
same as [A-Za-z0-9_] in ASCII, and
[\p{Alphabetic}\p{GC=Mark}\p{GC=Decimal_Number}\p{GC=Connector_Punctuation}]

in Unicode,[58] where the Alphabetic property contains more than Latin letters, and the Decimal_Number property contains more than Arab digits.

$string1 = "Hello World\n";
if ($string1 =~ m/\w/) {
  print "There is at least one alphanumeric ";
  print "character in $string1 (A-Z, a-z, 0-9, _).\n";
}

Output:

There is at least one alphanumeric character in Hello World
 (A-Z, a-z, 0-9, _).
\W Matches a non-alphanumeric character, excluding "_";
same as [^A-Za-z0-9_] in ASCII, and
[^\p{Alphabetic}\p{GC=Mark}\p{GC=Decimal_Number}\p{GC=Connector_Punctuation}]

in Unicode.

$string1 = "Hello World\n";
if ($string1 =~ m/\W/) {
  print "The space between Hello and ";
  print "World is not alphanumeric.\n";
}

Output:

The space between Hello and World is not alphanumeric.
\s Matches a whitespace character,
which in ASCII are tab, line feed, form feed, carriage return, and space;
in Unicode, also matches no-break spaces, next line, and the variable-width spaces (among others).
$string1 = "Hello World\n";
if ($string1 =~ m/\s.*\s/) {
  print "In $string1 there are TWO whitespace characters, which may";
  print " be separated by other characters.\n";
}

Output:

In Hello World
 there are TWO whitespace characters, which may be separated by other characters.
\S Matches anything but a whitespace.
$string1 = "Hello World\n";
if ($string1 =~ m/\S.*\S/) {
  print "In $string1 there are TWO non-whitespace characters, which";
  print " may be separated by other characters.\n";
}

Output:

In Hello World
 there are TWO non-whitespace characters, which may be separated by other characters.
\d Matches a digit;
same as [0-9] in ASCII;
in Unicode, same as the \p{Digit} or \p{GC=Decimal_Number} property, which is itself the same as the \p{Numeric_Type=Decimal} property.
$string1 = "99 bottles of beer on the wall.";
if ($string1 =~ m/(\d+)/) {
  print "$1 is the first number in '$string1'\n";
}

Output:

99 is the first number in '99 bottles of beer on the wall.'
\D Matches a non-digit;
same as [^0-9] in ASCII or \P{Digit} in Unicode.
$string1 = "Hello World\n";
if ($string1 =~ m/\D/) {
  print "At least one character in $string1";
  print " is not a digit.\n";
}

Output:

At least one character in Hello World
 is not a digit.
^ Matches the beginning of a line or string.
$string1 = "Hello World\n";
if ($string1 =~ m/^He/) {
  print "$string1 starts with the characters 'He'.\n";
}

Output:

Hello World
 starts with the characters 'He'.
$ Matches the end of a line or string.
$string1 = "Hello World\n";
if ($string1 =~ m/rld$/) {
  print "$string1 is a line or string ";
  print "that ends with 'rld'.\n";
}

Output:

Hello World
 is a line or string that ends with 'rld'.
\A Matches the beginning of a string (but not an internal line).
$string1 = "Hello\nWorld\n";
if ($string1 =~ m/\AH/) {
  print "$string1 is a string ";
  print "that starts with 'H'.\n";
}

Output:

Hello
World
 is a string that starts with 'H'.
\z Matches the end of a string (but not an internal line).[63]
$string1 = "Hello\nWorld\n";
if ($string1 =~ m/d\n\z/) {
  print "$string1 is a string ";
  print "that ends with 'd\\n'.\n";
}

Output:

Hello
World
 is a string that ends with 'd\n'.
[^…] Matches every character except the ones inside brackets.
$string1 = "Hello World\n";
if ($string1 =~ m/[^abc]/) {
  print "$string1 contains a character other than ";
  print "a, b, and c.\n";
}

Output:

Hello World
 contains a character other than a, b, and c.

Induction


Regular expressions can often be created ("induced" or "learned") based on a set of example strings. This is known as the induction of regular languages and is part of the general problem of grammar induction in computational learning theory. Formally, given examples of strings in a regular language, and perhaps also given examples of strings not in that regular language, it is possible to induce a grammar for the language, i.e., a regular expression that generates that language. Not all regular languages can be induced in this way (see language identification in the limit), but many can. For example, the set of examples {1, 10, 100}, and negative set (of counterexamples) {11, 1001, 101, 0} can be used to induce the regular expression 1⋅0* (1 followed by zero or more 0s).

See also

[edit]

Notes

[edit]

References

[edit]
[edit]
Revisions and contributorsEdit on WikipediaRead on Wikipedia
from Grokipedia
A regular expression, often abbreviated as regex or regexp, is a concise notation for specifying a search pattern that matches sequences of characters in text strings, enabling operations like searching, validating, and transforming data. Formally, regular expressions describe regular languages, the simplest class in the of formal languages, which can be recognized and generated by finite automata. The concept traces its origins to in the 1950s, when mathematician introduced the notation in his work on modeling neural networks and finite automata, using operators like union, concatenation, and (repetition) to represent sets of event sequences. Kleene's formalism, detailed in his 1956 paper "Representation of Events in Nerve Nets and Finite Automata," established the equivalence between regular expressions, regular grammars, and nondeterministic finite automata, laying the foundation for . This theoretical framework proved influential in compiler design and formal language theory, demonstrating that regular languages are closed under operations like complement and intersection. In practical applications, regular expressions gained prominence in the 1960s through implementations in text editors and Unix utilities. Ken Thompson incorporated them into the QED text editor around 1966, developing an efficient search algorithm based on nondeterministic finite automata that compiles patterns into executable code for fast matching. This algorithm, published in 1968, influenced subsequent tools like the Unix commands grep, sed, and awk, which standardized basic regex syntax for line-oriented processing. Over time, extensions in languages like Perl (with Perl-compatible regular expressions, or PCRE) added features such as lookaheads and backreferences, enhancing expressiveness for complex in , , and , while libraries in Python, , and make them ubiquitous in modern computing.

Fundamentals

Basic Concepts

A regular expression, often abbreviated as regex or regexp, is a sequence of characters that specifies a search pattern, primarily for use in pattern matching with strings of text, though it can be extended to other sequential data such as binary streams or genomic sequences. The term originates from the work of mathematician Stephen Cole Kleene, who introduced it in the 1950s to describe regular events in the context of formal languages, laying the groundwork for its practical application in text processing and computing. At its core, a regular expression identifies substrings within a larger text by combining literal characters (which match themselves exactly) with metacharacters that represent broader classes or repetitions, enabling efficient searching and extraction of patterns that would be cumbersome to specify otherwise. For example, literal characters like c, o, and l directly match those letters in sequence, while metacharacters introduce flexibility to handle variations or unknowns in the input. Key building blocks include the dot (.), a wildcard that matches any single character except a newline; the asterisk (*), which quantifies zero or more occurrences of the preceding element; and the plus sign (+), indicating one or more occurrences of the preceding element. These elements allow users to construct patterns incrementally, starting from simple literals and scaling to more nuanced matches without delving into automata theory. A common illustrative example is the pattern colou?r, which matches both "color" (American spelling) and "colour" (British spelling) by treating the u as optional through the question mark (?) quantifier for zero or one occurrence; though this builds on the basic quantifiers, it demonstrates the intuitive power of regex for handling minor textual variations, as the short session below illustrates.
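
As a quick illustration of these building blocks, the following short session (assuming Python's built-in re module, one of the implementations discussed later in this article) shows the dot, star, plus, and optional quantifiers in action:

import re

# The dot matches exactly one character between 'c' and 't'.
print(re.findall(r"c.t", "cat cot ct coat"))      # ['cat', 'cot']

# '*' allows zero or more 'o's; '+' requires at least one.
print(re.findall(r"go*d", "gd god good gooood"))  # ['gd', 'god', 'good', 'gooood']
print(re.findall(r"go+d", "gd god good"))         # ['god', 'good']

# '?' makes the 'u' optional, matching both spellings.
print(re.findall(r"colou?r", "color colour"))     # ['color', 'colour']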

Historical Development

The theoretical foundations of regular expressions trace back to the work of mathematician Stephen Cole Kleene, who introduced the concept of regular events in his 1951 research memorandum titled "Representation of Events in Nerve Nets and Finite Automata." This paper, later published in the 1956 volume Automata Studies, formalized regular expressions as a notation for describing sets of strings recognizable by finite automata, drawing from McCulloch-Pitts models to explore event sequences in computational systems. Kleene's notation, including operators for union, concatenation, and closure (now known as the Kleene star), provided the algebraic framework that would underpin later theoretical and practical developments in formal language theory.

Practical adoption of regular expressions began in the 1960s through the efforts of Ken Thompson at Bell Labs. In 1966, Thompson implemented regular expression support in his version of the QED text editor for the CTSS operating system, marking one of the earliest programmatic uses of the concept for pattern matching in text processing. This implementation compiled expressions into nondeterministic finite automata (NFAs) for efficient searching, influencing subsequent tools. By 1971, Thompson extended these capabilities to the ed line editor for Unix, where commands like substitute leveraged regular expressions for global search and replace operations. The grep utility, derived from the ed command g/re/p (global regular expression print), further popularized the technology in Unix environments starting around 1973, enabling rapid text filtering across files and solidifying regular expressions as a core Unix tool.

Theoretical extensions in the 1960s and 1970s built on Kleene's work through contributions from researchers like Alfred Aho and Ravi Sethi, who explored regular expressions in the context of compiler design and programming language semantics, emphasizing their role in lexical analysis and automaton construction. Meanwhile, integration into Unix utilities accelerated: the sed stream editor, developed in 1973-1974 by Lee E. McMahon at Bell Labs and first publicly released as part of Version 7 Unix in 1979, and the awk programming language (1977), developed by Alfred Aho, Brian Kernighan, and Peter Weinberger, incorporated regular expressions for scripting text transformations and data extraction.

Standardization efforts in the late 1980s and 1990s addressed portability across systems. The POSIX.2 standard, developed from 1988 to 1992 by the IEEE POSIX working group, defined Basic Regular Expressions (BRE) and Extended Regular Expressions (ERE) to unify syntax in tools like grep, sed, and awk, with BRE requiring escapes for metacharacters and ERE offering unescaped operators like + and ?. In the 1990s, Perl's powerful regex implementation, introduced in Perl 5 (1994), influenced broader adoption by adding features like non-greedy quantifiers and lookaheads; this led to the creation of Philip Hazel's Perl Compatible Regular Expressions (PCRE) library in 1997, which extended POSIX capabilities and became a de facto standard for many languages and applications. By the early 2000s, regular expressions permeated web technologies, with ECMAScript 3 (1999) integrating Perl-inspired syntax into JavaScript for client-side pattern matching in browsers.

Formal Theory

Formal Definition

In formal language theory, regular expressions provide a precise algebraic notation for describing regular languages. Over a finite alphabet Σ, the set R(Σ) of regular expressions is defined inductively as the smallest set satisfying the following conditions:
  • ∅ ∈ R(Σ), denoting the empty language {}.
  • ε ∈ R(Σ), denoting the singleton language {ε}.
  • For each a ∈ Σ, a ∈ R(Σ), denoting the singleton language {a}.
  • If R, S ∈ R(Σ), then (R ∪ S) ∈ R(Σ), denoting L(R) ∪ L(S), the union of the languages denoted by R and S.
  • If R, S ∈ R(Σ), then (R · S) ∈ R(Σ), denoting {xy | x ∈ L(R), y ∈ L(S)}, the concatenation of the languages L(R) and L(S).
  • If R ∈ R(Σ), then (R*) ∈ R(Σ), denoting the Kleene closure L(R)⁰ ∪ L(R)¹ ∪ L(R)² ∪ ⋯, the set of all finite concatenations (including the empty string ε) of strings from L(R).
Parentheses are used for grouping to resolve operator precedence, which conventionally ranks Kleene star above concatenation, and concatenation above union. In written notation, union is often ∪ (or | in some texts), concatenation is · (frequently omitted), and Kleene star is a postfix *. This construction ensures that every regular expression R denotes a unique language L(R) ⊆ Σ*. The languages denoted by regular expressions are exactly the regular languages, as established by Kleene's theorem: a language is regular if and only if it is denoted by some regular expression. The forward direction follows from the fact that the empty language and singleton languages are accepted by finite automata, and that automaton-recognizable languages are closed under union, concatenation, and Kleene star, via explicit constructions such as nondeterministic finite automata with ε-transitions for unions and stars. The converse is shown by deriving a regular expression from any finite automaton, for example, through state elimination or by solving a system of language equations derived from the automaton's transitions. The sketch below renders this inductive definition directly in code.
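
The inductive definition translates almost directly into code. The sketch below is a minimal illustration (not from the source; the tuple encoding and the bounded-length trick are assumptions of this sketch): expressions are small tuples, and lang computes the subset of L(R) containing all strings up to a given length.

EMPTY, EPS = ("empty",), ("eps",)
def lit(a):       return ("lit", a)
def union(r, s):  return ("union", r, s)
def concat(r, s): return ("concat", r, s)
def star(r):      return ("star", r)

def lang(r, maxlen):
    # All strings of L(r) with length <= maxlen, following the
    # inductive clauses above case by case.
    tag = r[0]
    if tag == "empty": return set()
    if tag == "eps":   return {""}
    if tag == "lit":   return {r[1]} if maxlen >= 1 else set()
    if tag == "union": return lang(r[1], maxlen) | lang(r[2], maxlen)
    if tag == "concat":
        xs, ys = lang(r[1], maxlen), lang(r[2], maxlen)
        return {x + y for x in xs for y in ys if len(x + y) <= maxlen}
    if tag == "star":
        # Iterate concatenation with the base language until no new
        # strings fit under the length bound (a finite fixpoint).
        base, result, frontier = lang(r[1], maxlen), {""}, {""}
        while frontier:
            frontier = {x + y for x in frontier for y in base
                        if len(x + y) <= maxlen} - result
            result |= frontier
        return result

# (0 ∪ 1)* · 1 : binary strings ending in 1
r = concat(star(union(lit("0"), lit("1"))), lit("1"))
print(sorted(lang(r, 3), key=lambda s: (len(s), s)))
# ['1', '01', '11', '001', '011', '101', '111']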

Expressive Power and Compactness

Regular expressions possess exactly the expressive power needed to describe the class of regular languages, which are precisely the languages accepted by finite automata. This equivalence was established by Kleene, who showed that the languages generated by regular expressions coincide with those recognized by finite automata. Specifically, any regular language can be expressed by a regular expression, and conversely, any regular expression defines a regular language. This correspondence holds through conversions between regular expressions and nondeterministic finite automata (NFAs) or deterministic finite automata (DFAs), with key constructions including the Glushkov automaton, which builds an NFA with a linear number of states directly from the expression, and Thompson's construction, which produces an epsilon-NFA suitable for efficient matching algorithms.

Despite this power, regular expressions cannot capture non-regular languages, such as L = {aⁿbⁿ | n ≥ 0}, which requires counting equal numbers of a's and b's. This limitation is proven using the pumping lemma for regular languages, which states that for any regular language there exists a pumping length p such that any string longer than p can be divided into parts where one segment can be repeated arbitrarily without leaving the language; applying this to strings in L of the form aᵖbᵖ leads to a contradiction, as pumping disrupts the equality of counts.

In terms of compactness, regular expressions offer a succinct representation compared to finite automata, but conversions reveal trade-offs. The Thompson or Glushkov constructions yield NFAs with O(n) states for an expression of size n, enabling practical matching without determinization. However, determinizing to a DFA via the subset construction can cause exponential state explosion: there exist regular expressions of size O(n) whose minimal DFAs require Θ(2ⁿ) states, as demonstrated by the language recognized by (a + b)* a (a + b)ⁿ⁻¹, where the DFA must remember which of the last n input symbols were a. This blowup, reproduced in the sketch below, underscores the benefits of NFA-based matching in practice.

Relative to other formalisms, regular expressions are weaker than context-free grammars, which generate the context-free languages including non-regular sets like {aⁿbⁿ | n ≥ 0}, as per the Chomsky hierarchy. Yet, for regular languages, regular expressions provide a more concise notation than equivalent context-free grammars, which may require additional nonterminals and productions to enforce the same finite-state constraints. For instance, arbitrary repetitions of symbols are directly captured by the Kleene star in expressions but demand extra recursive rules in grammar form.
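
The exponential blowup can be observed directly. The sketch below (a hand-built NFA and a textbook subset construction; all names are illustrative, not from the source) builds the (n+1)-state NFA for (a + b)* a (a + b)ⁿ⁻¹ and counts the states reached by determinization:

from collections import deque

def nfa_transitions(n):
    # NFA for (a+b)* a (a+b)^(n-1): states 0..n, with n accepting.
    delta = {(0, "a"): {0, 1},   # loop, or guess this 'a' starts the suffix
             (0, "b"): {0}}
    for i in range(1, n):
        for c in "ab":
            delta[(i, c)] = {i + 1}
    return delta

def dfa_state_count(n):
    # Breadth-first subset construction over reachable state sets.
    delta = nfa_transitions(n)
    start = frozenset({0})
    seen, queue = {start}, deque([start])
    while queue:
        S = queue.popleft()
        for c in "ab":
            T = frozenset(q for s in S for q in delta.get((s, c), ()))
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return len(seen)

for n in range(1, 8):
    print(n, dfa_state_count(n))  # 2, 4, 8, ... : grows as 2^n

Each reachable DFA state encodes which of the last n symbols were a, and all 2ⁿ combinations occur, so no smaller deterministic automaton is possible.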

Equivalence and Decidability

The equivalence problem for regular expressions, that is, determining whether two expressions denote the same language, is decidable: each can be converted to an equivalent finite automaton via standard constructions, and equivalence of automata is decidable by minimizing both to deterministic finite automata (DFAs) and comparing the results. This decidability follows from Kleene's theorem establishing the equivalence between regular expressions and finite automata, with the conversion process yielding a nondeterministic finite automaton (NFA) that can be determinized and minimized. Several algorithms address this problem directly on expressions or via automata. Brzozowski's derivative method computes successive derivatives of an expression with respect to input symbols, effectively constructing a DFA lazily; two expressions are inequivalent exactly when some sequence of derivatives leads to a pair in which one expression accepts the empty string and the other does not. Antichain algorithms optimize the subset construction during DFA conversion by pruning subsumed states, reducing the search space for equivalence checks on NFAs derived from expressions. Partial derivatives, introduced by Antimirov, partition an expression into simpler subexpressions based on prefix matches, enabling NFA construction and equivalence testing by comparing derivative sets recursively. The general complexity of regular expression equivalence is PSPACE-complete, requiring polynomial space but potentially exponential time due to the state explosion in NFA-to-DFA conversion. However, for star-free regular expressions (those without the Kleene star operator), equivalence is decidable in polynomial time, as their languages correspond to aperiodic automata amenable to efficient minimization. In practice, these algorithms face exponential time and space costs in the worst case, arising from the inherent ambiguity of expressions like highly nested unions, which can produce NFAs with exponentially many reachable states. For instance, to verify that a(ba)* and (ab)*a denote the same language (alternating strings that start and end with a) without expanding to full automata, one can compare derivatives: every pair of corresponding derivatives reachable from the two expressions agrees on whether it accepts the empty string, and only finitely many such pairs arise, confirming equivalence. By contrast, the superficially similar pair (a|b)*a and a(a|b)* is not equivalent, since the former requires a final a and the latter an initial one; the derivative check detects the disagreement. A sketch of this derivative-based check follows.
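
The derivative method can be made concrete. The following compact implementation is an illustrative sketch under stated assumptions: expressions are encoded as tuples, the constructors perform light algebraic simplification, and termination relies on that simplification keeping the set of derivatives finite, which holds for the examples here but is not guaranteed without fuller normalization.

EMPTY, EPS = ("0",), ("e",)
def lit(a): return ("c", a)

def union(r, s):
    # Flatten nested unions, drop the empty language, deduplicate.
    ops = set()
    for x in (r, s):
        ops |= x[1] if x[0] == "|" else ({x} if x != EMPTY else set())
    if not ops:       return EMPTY
    if len(ops) == 1: return next(iter(ops))
    return ("|", frozenset(ops))

def concat(r, s):
    if r == EMPTY or s == EMPTY: return EMPTY
    if r == EPS: return s
    if s == EPS: return r
    return (".", r, s)

def star(r):
    if r in (EMPTY, EPS): return EPS
    if r[0] == "*": return r
    return ("*", r)

def nullable(r):
    t = r[0]
    if t in ("0", "c"): return False
    if t in ("e", "*"): return True
    if t == "|": return any(nullable(x) for x in r[1])
    return nullable(r[1]) and nullable(r[2])  # concatenation

def deriv(r, ch):
    # Brzozowski derivative: the language of suffixes after reading ch.
    t = r[0]
    if t in ("0", "e"): return EMPTY
    if t == "c": return EPS if r[1] == ch else EMPTY
    if t == "|":
        out = EMPTY
        for x in r[1]:
            out = union(out, deriv(x, ch))
        return out
    if t == "*": return concat(deriv(r[1], ch), r)
    head = concat(deriv(r[1], ch), r[2])
    return union(head, deriv(r[2], ch)) if nullable(r[1]) else head

def equivalent(r, s, alphabet="ab"):
    # Bisimulation over pairs of derivatives: inequivalent iff some
    # reachable pair disagrees on accepting the empty string.
    seen, todo = set(), [(r, s)]
    while todo:
        pair = todo.pop()
        if pair in seen: continue
        seen.add(pair)
        a, b = pair
        if nullable(a) != nullable(b): return False
        todo += [(deriv(a, ch), deriv(b, ch)) for ch in alphabet]
    return True

a, b = lit("a"), lit("b")
print(equivalent(concat(a, star(concat(b, a))),
                 concat(star(concat(a, b)), a)))  # True:  a(ba)* = (ab)*a
print(equivalent(concat(star(union(a, b)), a),
                 concat(a, star(union(a, b)))))   # False: (a|b)*a != a(a|b)*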

Practical Syntax

Core Syntax Elements

Regular expressions are typically delimited by specific characters or string boundaries depending on the programming language or tool. In Perl, patterns are enclosed within forward slashes, such as /pattern/, allowing the use of the backslash \ to escape special characters within the pattern. In Python's re module, regular expressions are provided as string literals delimited by quotes, like r"pattern", where the raw string prefix r prevents backslash escaping issues, and metacharacters are escaped with \ as needed.

Basic metacharacters provide the foundational mechanisms for pattern matching. The dot . matches any single character except a newline. Anchors ^ and $ denote the start and end of the string or line, respectively, ensuring matches occur at specific positions. Character classes, defined by square brackets [], match any one character from a specified set, such as [a-z] for lowercase letters. Within character classes, most characters are treated as literals and match themselves without escaping, including symbols such as @. Exceptions to this rule include the hyphen -, which forms a range when placed between two characters (e.g., [a-z] for lowercase ASCII letters) but is treated as literal when positioned first or last in the class (e.g., [-az] or [az-]), or escaped as \-; the caret ^, which negates the class if placed first (e.g., [^a-z] matches any character except lowercase letters) but is literal otherwise; the closing bracket ], which must be escaped as \] or placed first in the class to be matched literally; and the backslash \, which must be escaped as \\ to match literally or used for special sequences. This syntax is standard across most modern regular expression implementations, including Python's re module. For example, the Python pattern r'[@a-zA-Z0-9._%+-]' matches any alphanumeric character, @, ., _, %, +, or -, a character class commonly employed in email address validation patterns. Parentheses () create capturing groups, which both group subpatterns for application of operators and capture matched substrings for later reference.

Quantifiers specify repetition of the preceding element, enabling concise descriptions of variable-length matches. The question mark ? matches zero or one occurrence, * matches zero or more, and + matches one or more, with these being greedy by default to maximize the match length. The interval notation {n,m} matches between n and m occurrences (inclusive), where n and m are non-negative integers; omitting m as {n,} means n or more, and {n} means exactly n.

Escaping allows literal matching of metacharacters or shorthand for common classes using backslashes. The sequence \d matches any ASCII digit [0-9], while \D negates it to match non-digits. Similarly, \w matches any ASCII word character (alphanumeric or underscore, equivalent to [a-zA-Z0-9_]), and \W matches non-word characters. To match a literal backslash, it must be escaped as \\.

Operator precedence in regular expressions follows a hierarchy to resolve ambiguities without explicit grouping. Parentheses have the highest precedence, allowing explicit control over subpattern evaluation. Concatenation of adjacent elements binds more tightly than alternation, which uses the pipe | to match either the left or right subpattern and has the lowest precedence; for example, cat|dog matches "cat" or "dog", while ca(t|d)og confines the alternation to a single character, matching "catog" or "cadog". These rules are exercised in the short demo below.
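
A few quick checks (using Python's re module, whose syntax the paragraphs above describe) make these rules concrete:

import re

print(re.findall(r"[a-z]+", "Regex 101!"))        # ['egex'] - class matches lowercase runs only
print(bool(re.fullmatch(r"\d{2,4}", "123")))      # True - interval quantifier: 2 to 4 digits
print(re.findall(r"cat|dog", "catalog dogma"))    # ['cat', 'dog'] - alternation has lowest precedence
print(re.findall(r"ca(t|d)og", "catog cadog"))    # ['t', 'd'] - the group confines '|' and captures
print(re.findall(r"ca(?:t|d)og", "catog cadog"))  # ['catog', 'cadog'] - non-capturing variant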

POSIX Standards

The POSIX standards for regular expressions, specified in IEEE Std 1003.2-1992 as part of the broader POSIX.2 framework for shell and utilities, define two primary variants: Basic Regular Expressions (BRE) and Extended Regular Expressions (ERE). These standards promote portability in pattern matching across UNIX-like operating systems, with BRE serving as the foundation for traditional utilities like grep and sed, while ERE provides enhancements for more expressive patterns in tools like egrep. The definitions were later incorporated into POSIX.1-2008 for C library functions such as regcomp() and regexec(), ensuring consistent implementation.

BRE, the more conservative variant, requires backslashes to escape metacharacters for grouping and repetition, reflecting the syntax of early UNIX tools. For instance, \(abc\) captures the group "abc", and \{n,m\} matches between n and m repetitions of the preceding element. The sole unescaped quantifier is *, denoting zero or more occurrences. Other key metacharacters include . (any single character), ^ (start of line), $ (end of line), and bracket expressions [ ] for sets of characters. In BRE, characters like +, ?, and | are treated as literals, and escaping them does not confer special meaning as quantifiers or operators.

ERE builds on BRE by natively supporting additional operators without escapes, allowing for more readable and compact expressions. Grouping uses (abc) instead of \(abc\), and repetition employs {n,m} directly. New quantifiers include + for one or more occurrences and ? for zero or one occurrence, while | enables alternation (e.g., cat|dog matches either "cat" or "dog"). Both variants retain * and support interval notation in their respective escaped or unescaped forms, but ERE's unescaped metacharacters, such as (, ), {, |, +, and ?, are only special outside bracket expressions.

Both BRE and ERE incorporate character classes for locale-aware matching, using the portable bracketed notation [[:class:]]. Examples include [[:alpha:]] for alphabetic characters, [[:digit:]] for decimal digits (0-9), [[:space:]] for whitespace, and [[:alnum:]] for alphanumeric characters. These classes ensure consistent behavior across different locales and collating sequences, avoiding reliance on ASCII-specific ranges. The following table summarizes key differences in metacharacter handling between BRE and ERE:
Feature                     BRE Syntax                 ERE Syntax
Grouping                    \( \)                      ( )
Repetition range            \{n,m\}                    {n,m}
Zero or more                *                          *
One or more                 No native (use \{1,\})     +
Zero or one                 No native (use \{0,1\})    ?
Alternation                 No native                  |
Special outside brackets    . * [ \ ^ $                . * [ \ ( ) { + ? | ^ $

Advanced Matching Features

In Perl and Perl-Compatible Regular Expressions (PCRE), backreferences enable matching text previously captured by a group, using syntax such as \1 for the first group or \g{name} for named groups, allowing patterns to reference earlier substrings for validation or repetition. Lookahead assertions extend matching capabilities without consuming characters: positive lookahead (?=pattern) succeeds if the enclosed pattern follows the current position, while negative lookahead (?!pattern) succeeds if it does not, facilitating conditional matches like verifying a word boundary ahead.

Lazy quantifiers modify greedy defaults by appending ? (e.g., *? or +?), instructing the engine to match the minimal number of repetitions necessary for the overall pattern to succeed, which contrasts with greedy quantifiers like * or + that expand maximally first. For example, in the pattern foo(.*?)bar applied to "foo baz bar quux bar", the lazy .*? captures only " baz ", the shortest stretch between "foo" and the first "bar", whereas greedy .* would capture " baz bar quux " up to the last "bar". This behavior reduces unnecessary backtracking in complex patterns.

Possessive quantifiers, introduced in Perl 5.10 and supported in PCRE, append + to quantifiers (e.g., a++ or {n,m}+), matching greedily like their non-possessive counterparts but preventing backtracking into the quantified portion, which mitigates catastrophic backtracking in ambiguous patterns. For instance, the possessive a++b on "aaab" matches without retrying failed submatches, improving efficiency over a+b in scenarios with overlapping alternatives.

PCRE, originally developed to emulate Perl's regex engine, has significantly influenced modern implementations, including those in Java's java.util.regex package and .NET's System.Text.RegularExpressions, by providing a standardized set of advanced features beyond the POSIX basics. The IETF's I-Regexp, specified in RFC 9485 as a proposed standard in October 2023, aims to promote interoperability across regex engines by defining a limited, portable subset of features, including support for character properties via escapes like \p{L} within character classes. This format emphasizes predictability and avoids engine-specific extensions, incorporating standardized assertions such as anchors (^, $) and word boundaries (\b) to ensure consistent behavior in applications like JSONPath. Several of these features are demonstrated below.
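
The following short demonstrations use Python's re module, which implements these Perl-derived features; note that possessive quantifiers require Python 3.11 or newer, so that part is version-guarded:

import re
import sys

s = "foo baz bar quux bar"
print(re.search(r"foo(.*)bar", s).group(1))   # ' baz bar quux ' - greedy: up to the last 'bar'
print(re.search(r"foo(.*?)bar", s).group(1))  # ' baz '          - lazy: up to the first 'bar'

print(re.findall(r"\d+(?= EUR)", "12 EUR, 9 USD"))     # ['12'] - positive lookahead consumes nothing
print(bool(re.fullmatch(r"(\w+) \1", "hello hello")))  # True   - backreference to group 1

if sys.version_info >= (3, 11):
    print(bool(re.fullmatch(r"a++b", "aaab")))   # True  - possessive quantifier still matches here
    print(bool(re.fullmatch(r"a++ab", "aaab")))  # False - no backtracking into a++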

Implementations

Algorithms and Performance

Regular expressions are typically implemented using finite automata, with two primary approaches: nondeterministic finite automata (NFAs) and deterministic finite automata (DFAs). Thompson's construction algorithm builds an NFA from a regular expression in linear time and space relative to the pattern length m, producing a graph with O(m) states and transitions that directly mirrors the expression's structure through recursive composition of basic automata for literals, concatenation, union, and Kleene star. This method ensures the NFA has no cycles other than those induced by Kleene stars, facilitating efficient simulation. In contrast, converting an NFA to an equivalent DFA via the subset construction can result in an exponential blowup in state count, up to 2^k states where k is the number of NFA states, though practical sizes are often smaller due to unreachable subsets.

The matching process varies by implementation. Backtracking engines, common in recursive implementations, treat the pattern as a tree of choices and greedily explore paths, retracting (backtracking) upon failure to try alternatives; this supports features like capturing groups but can lead to exponential time in the worst case. NFA simulation, as in Thompson's original approach, advances multiple states in parallel using a position set, achieving linear time O(mn) for text length n and pattern length m by processing each input character once and tracking active states without backtracking. DFA-based matching is also O(mn) but requires precomputing the full transition table, which trades space for predictability.

Worst-case running times highlight trade-offs: NFA and DFA simulations guarantee O(mn) performance, but backtracking can degrade to exponential, as in catastrophic backtracking where nested quantifiers create an explosion of partial matches. For instance, a pattern such as (a+)+b matched against a string of n 'a's (containing no 'b') forces a backtracking engine to explore O(2^n) paths before failing, consuming excessive time and resources; the timing sketch below reproduces this blowup.

Optimizations mitigate these issues. Regex caching precompiles and stores automata for repeated patterns, avoiding reconstruction costs in high-throughput scenarios. Integration of string-search heuristics like Boyer-Moore accelerates initial positioning by skipping irrelevant text segments based on pattern suffixes, particularly effective for literal-heavy expressions. Notable implementations include Google's RE2, which uses a DFA/NFA hybrid for linear-time guarantees and predictability, rejecting backtracking-prone features to ensure worst-case O(mn) without exponential risks. In contrast, Oniguruma, the engine underlying Ruby's regex support, employs backtracking for full POSIX compatibility, offering flexibility at the potential cost of performance on ambiguous patterns.
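
Catastrophic backtracking is easy to reproduce. The timing sketch below uses Python's backtracking re engine; exact times vary by machine, and the largest sizes may take seconds, but the roughly doubling cost per step is the point:

import re
import time

pattern = re.compile(r"(a+)+b")  # nested quantifiers, and no 'b' to find
for n in (16, 18, 20, 22):
    t0 = time.perf_counter()
    pattern.search("a" * n)      # always fails, after exploring ~2^n splits
    print(n, f"{time.perf_counter() - t0:.3f}s")

A linear-time engine such as RE2 answers the same failing queries almost instantly, which is why it rejects features that force backtracking.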

Language and Tool Support

Regular expressions are natively supported in many high-level programming languages. In Python, the built-in re module provides pattern matching operations similar to those in Perl, supporting Unicode strings and a syntax that includes features like grouping, alternation, and quantifiers. Within character classes ([]), most characters are treated literally, including the @ symbol, which requires no escaping; only - (unless at the start or end or escaped), ^ (if first), ], and \ have special handling or require escaping. For example, the pattern r'[@a-zA-Z0-9._%+-]' matches alphanumeric characters, @, ., _, %, +, or - and is commonly used in email address validation. Java's java.util.regex package, introduced in Java 1.4, implements a Perl-inspired syntax for compiling and matching patterns, with classes like Pattern and Matcher handling compilation and execution. JavaScript's RegExp object, defined in the ECMAScript standard, enables pattern matching with flags for global, case-insensitive, and Unicode-aware searches.

Unix-like systems provide robust regular expression support through command-line tools adhering to POSIX standards. The grep utility supports Basic Regular Expressions (BRE) by default and Extended Regular Expressions (ERE) via the -E option, allowing pattern-based line filtering in files or streams. Tools like sed and awk also incorporate BRE and ERE for stream editing and text processing, with egrep historically serving as a dedicated interface for ERE until its deprecation in favor of grep -E.

Several libraries extend regular expression capabilities to languages lacking native support or requiring enhanced features. Boost.Regex for C++ offers a comprehensive API for pattern compilation and matching, compatible with ECMAScript and Perl syntax, and served as a precursor to the C++11 standard library's <regex> header. The International Components for Unicode (ICU) library provides cross-platform regular expression functionality with full Unicode support, including properties and collation-aware matching, available in C, C++, Java, and other bindings.

Recent standards have introduced updates to improve interoperability and Unicode handling. ECMAScript 2024 added the /v flag to RegExp, enabling set notation and enhanced property escapes for more precise pattern definitions in JavaScript environments. ECMAScript 2025 further enhanced regex with pattern modifiers (inline flags) for granular control over specific parts of expressions and support for duplicate named capture groups across alternatives. Post-2020 IETF drafts culminated in the I-Regexp specification (RFC 9485), defining a limited, interoperable regex flavor for cross-engine compatibility, particularly in JSONPath and web contexts. SQL Server 2025 introduced native regular expression support in T-SQL, including functions like REGEXP_LIKE for pattern matching in queries.

Despite widespread adoption, gaps persist in native support and consistency. Low-level languages like C lack built-in regex facilities, necessitating external libraries such as PCRE or Oniguruma for implementation. Unicode support varies across engines, with levels defined by Unicode Technical Standard #18 ranging from basic code point matching (Level 1) to advanced features like grapheme clusters and tailoring (Level 3), leading to differences in behavior for international text. POSIX standards provide a foundational baseline for basic and extended regex in Unix tools, as detailed in dedicated sections.

Extensions

Unicode Integration

Regular expressions have been extended to support Unicode, enabling pattern matching across the vast repertoire of international characters and scripts beyond ASCII limitations. Basic integration, as outlined in Unicode Technical Standard #18 (UTS #18), includes mechanisms for specifying code points via escapes like \u{hhhh} for single characters or \U{hhhhhhhh} for larger values, allowing direct reference to any of the 159,801 code points assigned as of Unicode 17.0 (September 2025). In JavaScript, the /u flag, introduced in ECMAScript 2015, activates this mode, treating surrogate pairs as single code points and enabling full code point escapes, such as /[\u{1F600}-\u{1F64F}]/u to match emoticon faces. Property escapes, also part of UTS #18 Level 1, use syntax like \p{General_Category=Letter} or the shorthand \p{L} to match characters based on properties such as scripts (\p{Script=Latin} or \p{Script=Latn} for the Latin script) or categories, providing a concise way to target linguistic classes without enumerating individual code points.

Extended Unicode support at UTS #18 Level 2 addresses user-perceived text units through grapheme clusters, which combine base characters with diacritics or modifiers into single visual units, as defined in Unicode Standard Annex #29 (UAX #29). In Perl-Compatible Regular Expressions (PCRE), the escape \X matches an extended grapheme cluster atomically, treating sequences like the precomposed "é" (U+00E9) or "e" plus a combining acute accent (U+0065 U+0301) as one unit, which is essential for operations like word boundaries or line wrapping in international text. The International Components for Unicode (ICU) library similarly supports \X for grapheme cluster matching, ensuring compliance with UAX #29 boundaries in its regex engine.

Unicode normalization handles equivalent representations of text, such as composed forms (NFC) versus decomposed forms (NFD), where "café" in NFC (c, a, f, U+00E9) should match "café" in NFD (c, a, f, e, U+0301). UTS #18 recommends normalizing input text before matching to achieve canonical equivalence, though direct support in engines varies; for instance, ICU provides normalization utilities but limited built-in canonical equivalence in regex via the UREGEX_CANON_EQ flag, which remains unimplemented in recent versions. This approach ensures patterns like /café/i can match both forms when combined with appropriate preprocessing.

Challenges in Unicode regex include case folding across diverse scripts and collation-sensitive matching. Full case folding, per UTS #18 Level 2, uses Unicode's CaseFolding.txt data to map characters like "ß" (U+00DF) to "ss" in case-insensitive modes, but handling ligatures or script-specific rules (e.g., the Greek sigma's final form) can lead to inconsistencies without full implementation. Collation-sensitive matching, which aligns with locale-specific ordering beyond simple code point comparison, is not standardized in UTS #18 (Level 3 was retracted) and requires external tailoring, posing difficulties for global search-and-replace operations.

Recent updates enhance regex capabilities in modern engines. ECMAScript 2018 expanded the /u mode to include property escapes like \p{Script=Grek}, with ECMAScript 2024 introducing the /v flag for advanced set operations on properties, such as nested classes and intersections. The ICU library achieves near-full UTS #18 Level 2 compliance, including comprehensive property support and grapheme clusters, making it a reference for tailored Unicode handling in applications like databases and text processors. The 2022 revision of UTS #18 further refined property complements and character class parsing to better accommodate evolving Unicode versions, addressing gaps in 2020s engines for precise international text processing. A small sketch using property escapes and grapheme clusters follows.
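
Python's built-in re module supports neither \p{...} nor \X, so the sketch below assumes the third-party regex module (installable as "regex"), which implements both:

import regex  # third-party module: pip install regex

# Property escape: one or more letters from any script.
print(regex.findall(r"\p{L}+", "Hello café こんにちは"))
# ['Hello', 'café', 'こんにちは']

# \X groups code points into user-perceived grapheme clusters.
decomposed = "cafe\u0301"                # 'e' followed by U+0301 combining acute
print(regex.findall(r"\X", decomposed))  # ['c', 'a', 'f', 'é'] - four clusters
print(len(decomposed))                   # 5 code points, but 4 visible characters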

Patterns Beyond Regular Languages

Modern regular expression engines extend beyond the formal definition of regular languages by incorporating features like zero-width assertions and backreferences, which enable matching more complex patterns but introduce computational challenges. Zero-width assertions allow a regex to test conditions at the current position in the input string without consuming characters. Positive lookahead, denoted as (?=pattern), matches if the pattern immediately follows the current position; for example, the regex foo(?=bar) matches "foo" only if followed by "bar". Negative lookahead (?!pattern) matches if the pattern does not follow. Similarly, positive lookbehind (?<=pattern) and negative lookbehind (?<!pattern) check preceding text, though many engines require fixed-length patterns in lookbehinds for implementation reasons. Word boundaries, such as \b, assert a transition between word characters (alphanumeric or underscore) and non-word characters, or string edges; for instance, \bword\b matches "word" as a whole word. These assertions originated in Perl and are now standard in engines like PCRE and .NET.

Backreferences, such as \1 or \g{1}, refer to previously captured substrings from parenthesized groups, allowing the regex to match repeated content exactly. For example, the pattern (foo)\1 matches "foofoo" by capturing "foo" and requiring it to repeat. This feature enables recognition of non-regular languages; a classic case is the "square" language of strings of the form ww, matched by ^(.+)\1$, which requires comparing substrings of unbounded length, impossible with finite automata. Backreferences were introduced in early regex implementations like those in Unix tools and Perl, enhancing expressiveness for practical tasks like HTML tag matching.

These extensions have significant implications for performance and theory. Traditional regular expressions can be matched in linear time using nondeterministic finite automata (NFA), but assertions and backreferences often rely on backtracking search, which can exhibit exponential time in the worst case due to repeated pattern trials. Moreover, testing equivalence between two such extended regexes is NP-hard, complicating optimization and verification, as shown in foundational work on pattern matching algorithms.

Advanced engines further expand capabilities with recursive patterns and balancing groups. In PCRE, (?R) invokes the entire pattern recursively, enabling matching of nested structures like balanced parentheses: the pattern \(([^()]++|(?R))*\), anchored to span the whole string, matches strings such as (()) or ((a(b))) by recursively validating inner pairs. This handles context-free constructs, such as properly nested delimiters. Similarly, .NET's balancing groups use syntax like (?<open>...) to push onto a capture stack and (?<close-open>...) to pop it, tracking opening and closing elements for arbitrary nesting, such as matched tags. These features push such engines well beyond the power of classical finite automata, at the cost of potentially expensive matching, though practical limits prevent non-halting behavior. Control mechanisms like lazy quantifiers can interact with these features to fine-tune matching greediness, as covered in advanced syntax. The sketch below exercises both a backreference and a recursive pattern.
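
Both capabilities can be exercised briefly in Python: the backreference works with the built-in re module, while the recursive pattern assumes the third-party regex module, which implements PCRE-style (?R):

import re
import regex  # third-party module: pip install regex

# Backreference: (.+)\1 matches exactly the non-regular squares ww.
print(bool(re.fullmatch(r"(.+)\1", "abab")))  # True  (w = 'ab')
print(bool(re.fullmatch(r"(.+)\1", "abaa")))  # False

# Recursion: balanced parentheses, a context-free language.
balanced = regex.compile(r"\(([^()]++|(?R))*\)")
print(bool(balanced.fullmatch("((a(b)))")))   # True
print(bool(balanced.fullmatch("(()")))        # False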

Applications

Common Uses

Regular expressions are widely employed in text processing tasks, particularly for search and replace operations within popular code editors. In Vim, they enable efficient pattern matching for actions such as substituting text across files, leveraging metacharacters to handle complex searches like finding variations in spellings or structures. Similarly, Visual Studio Code supports regular expressions in its find and replace functionality, allowing users to perform global searches across workspaces and manipulate capture groups for case adjustments during replacements. These capabilities facilitate rapid editing in development workflows, from refactoring code to cleaning unstructured data.

A common application involves input validation, where patterns verify formats like email addresses and URLs. For email validation, a representative pattern such as ^[\w\.-]+@[\w\.-]+\.\w+$ matches standard structures by checking for alphanumeric characters, dots, and hyphens before and after the "@" symbol, followed by a top-level domain. URL validation follows analogous principles, using expressions to ensure protocols, domains, and paths conform to expected syntax, though overly rigid patterns may reject valid variations.

In data extraction, regular expressions excel at parsing logs to identify timestamps, error codes, or IP addresses from unstructured entries, streamlining monitoring in systems like application servers. They also support tokenization in natural language processing by splitting text into words or sentences based on delimiters and patterns, aiding preprocessing for language models. However, extracting data from HTML requires caution, as its nested tags form a context-free structure unsuitable for full parsing with regular expressions, which are limited to regular languages and may fail on balanced structures like <div><p>text</p></div>.

Within programming, regular expressions aid input sanitization by filtering malicious or malformed data, such as stripping scripts from user submissions to prevent injection attacks. In web frameworks such as Django, they define URL route paths, matching parameters against criteria (for instance, ensuring numeric IDs with patterns like (\d+)) to handle requests efficiently without exact matches; a minimal routing sketch appears after this overview.

Beyond software, regular expressions apply to protocol analysis, where they scan network packets for signatures of anomalies or specific headers in intrusion detection systems. In bioinformatics, they search DNA sequences for motifs, such as promoter regions or restriction sites, using patterns that account for nucleotide ambiguities like "N" for any base.

Despite their versatility, regular expressions have limitations for parsing context-free languages like XML, where nested elements require stack-based parsers rather than finite automata to handle arbitrary depths reliably. For such cases, dedicated tools like XML parsers are recommended to ensure correctness and avoid errors from incomplete matches.
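
The routing idea can be sketched in a few lines. The route table and handler names below are hypothetical, not any real framework's API; the point is that (\d+) guarantees a captured ID is numeric before any handler runs:

import re

ROUTES = [
    (re.compile(r"^/users/(\d+)$"), "user_detail"),
    (re.compile(r"^/posts/(\d+)/comments$"), "post_comments"),
]

def dispatch(path):
    # Return the first handler whose pattern matches, with captured params.
    for pattern, handler in ROUTES:
        m = pattern.match(path)
        if m:
            return handler, m.groups()
    return "not_found", ()

print(dispatch("/users/42"))     # ('user_detail', ('42',))
print(dispatch("/users/alice"))  # ('not_found', ()) - non-numeric ID rejected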

Practical Examples

One practical application of regular expressions is validating email addresses, a common requirement in form processing and data entry systems. A widely used pattern for this purpose is \b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b, which matches a word boundary, followed by one or more alphanumeric characters or certain symbols before the @, then a domain name that may contain dots, and a top-level domain of at least two letters, ending with a word boundary. For example, applied to the string "Contact us at alice@example.com or bob@invalid.", this pattern matches "alice@example.com" but not "bob@invalid." because the second address lacks a dotted top-level domain. Another test string, "jane.doe+news@mail.example.museum", successfully matches the entire address, demonstrating tolerance for subdomains, plus signs in local parts, and longer TLDs.

Extracting phone numbers from text, such as in customer records or scraped web pages, often employs capturing groups to isolate components like the area code. A standard pattern for U.S. phone numbers in the format XXX-XXX-XXXX is (\d{3})-(\d{3})-(\d{4}), where the first group captures the three-digit area code, the second the three-digit exchange, and the third the four-digit line number. Testing on "Call 123-456-7890 for support or 987-654-3210.", this regex matches "123-456-7890" with groups "123", "456", and "7890", while ignoring the surrounding text; a second match extracts "987-654-3210" with groups "987", "654", and "3210". This approach allows programmatic access to the area code via the first capture group for tasks like regional filtering.

Parsing log files to extract structured data, such as log levels, timestamps, and messages, is essential in monitoring and debugging applications. A representative pattern for lines starting with a log level (INFO or ERROR), followed by a date and message, is ^(?:INFO|ERROR)\s+(\d{4}-\d{2}-\d{2})\s+(.+)$, using a non-capturing group for the level, capturing the ISO-like date in the first group, and the remaining message in the second. For the log entry "INFO 2023-11-08 User login successful", it matches the entire line with groups "2023-11-08" and "User login successful"; on "ERROR 2023-11-07 Database connection failed", the groups capture "2023-11-07" and "Database connection failed", enabling easy filtering by level or date in analysis tools.

Regular expressions with Unicode support extend matching to international text, such as identifying words across scripts. The pattern \p{L}+ uses a Unicode property escape to match one or more letters from any language, where \p{L} denotes the Letter category in Unicode. Applied to "Hello café こんにちは world", it matches "Hello", "café", "こんにちは", and "world" separately, capturing alphabetic sequences while skipping spaces and non-letter characters; in contrast, a basic [a-zA-Z]+ would only match "Hello" and "world", ignoring accented or non-Latin scripts. This facilitates global text processing, like tokenization in multilingual search engines.

Demonstrating backtracking behavior, quantifiers in regular expressions can be greedy (maximizing matches) or lazy (minimizing them), affecting how the engine processes ambiguous inputs. Consider the string "aaab" and the pattern a+b: the greedy a+ initially consumes all three 'a's, then matches the final 'b' without backtracking, resulting in a full match of "aaab".
Switching to the lazy version a+?b, the engine starts with the minimal one 'a', attempts to match 'b' (failing on the next 'a'), expands to two 'a's (still failing), then to three 'a's, and finally matches the 'b', also matching the entire "aaab" but via incremental expansion rather than maximal grab. This highlights how laziness influences the matching path in engines that backtrack, potentially impacting performance on longer strings; the snippet below confirms both paths.
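
The walkthrough can be checked directly with Python's re module: both patterns match the whole of "aaab", while the quantifiers used alone reveal the greedy/lazy difference:

import re

print(re.match(r"a+b", "aaab").group())   # 'aaab' - greedy path: grab all 'a's, then 'b'
print(re.match(r"a+?b", "aaab").group())  # 'aaab' - lazy path: expand one 'a' at a time
print(re.match(r"a+", "aaab").group())    # 'aaa'  - greedy quantifier alone
print(re.match(r"a+?", "aaab").group())   # 'a'    - lazy quantifier alone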
