Imperative programming
In computer science, imperative programming is a programming paradigm that uses statements to change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates step by step (with the general order of the steps being determined in source code by the placement of statements one below the other),[1] rather than on high-level descriptions of its expected results.
The term is often used in contrast to declarative programming, which focuses on what the program should accomplish without specifying all the details of how the program should achieve the result.[2]
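As a small illustration (added here, not from the cited sources), an imperative C fragment spells out each step and state change needed to sum an array, the "how" that a declarative query or formula would leave unstated:

int sum(const int *values, int n) {
    int total = 0;                 // explicit state
    for (int i = 0; i < n; i++) {  // explicit order of steps
        total += values[i];        // a state change at each step
    }
    return total;
}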
Procedural programming
Procedural programming is a type of imperative programming in which the program is built from one or more procedures (also termed subroutines or functions). The terms are often used as synonyms, but the use of procedures has a dramatic effect on how imperative programs appear and how they are constructed. Heavy procedural programming, in which state changes are localized to procedures or restricted to explicit arguments and returns from procedures, is a form of structured programming. Since the 1960s, structured programming and modular programming in general have been promoted as techniques to improve the maintainability and overall quality of imperative programs. The concepts behind object-oriented programming attempt to extend this approach.
Procedural programming could be considered a step toward declarative programming. A programmer can often tell, simply by looking at the names, arguments, and return types of procedures (and related comments), what a particular procedure is supposed to do, without necessarily looking at the details of how it achieves its result. At the same time, a complete program is still imperative since it fixes the statements to be executed and their order of execution to a large extent.
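For instance, a hypothetical C prototype (invented for illustration) communicates what the procedure does without showing how:

// Returns the median of the first n elements of values,
// without revealing the sorting or selection strategy used.
double median(const int *values, int n);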
Rationale and foundations of imperative programming
The programming paradigm used to build programs for almost all computers typically follows an imperative model.[note 1] Digital computer hardware is designed to execute machine code, which is native to the computer and is usually written in the imperative style, although low-level compilers and interpreters using other paradigms exist for some architectures such as Lisp machines.
From this low-level perspective, the program state is defined by the contents of memory, and the statements are instructions in the native machine language of the computer. Higher-level imperative languages use variables and more complex statements, but still follow the same paradigm. Recipes and process checklists, while not computer programs, are also familiar concepts that are similar in style to imperative programming; each step is an instruction, and the physical world holds the state. Since the basic ideas of imperative programming are both conceptually familiar and directly embodied in the hardware, most computer languages are in the imperative style.
Assignment statements, in the imperative paradigm, perform an operation on information located in memory and store the results in memory for later use. High-level imperative languages, in addition, permit the evaluation of complex expressions, which may consist of a combination of arithmetic operations and function evaluations, and the assignment of the resulting value to memory. Looping statements (as in while loops, do while loops, and for loops) allow a sequence of statements to be executed multiple times. Loops can either execute the statements they contain a predefined number of times, or they can execute them repeatedly until some condition is met. Conditional branching statements allow a sequence of statements to be executed only if some condition is met. Otherwise, the statements are skipped and the execution sequence continues from the statement following them. Unconditional branching statements allow an execution sequence to be transferred to another part of a program. These include the jump (called goto in many languages), switch, and the subprogram, subroutine, or procedure call (which usually returns to the next statement after the call).
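A brief C fragment (a sketch added for illustration) exercises each kind of statement just described: assignment, a loop, a conditional, and a call that returns to the following statement:

#include <stdio.h>

int square(int n) {        // subprogram; the call below returns here
    return n * n;
}

int main(void) {
    int total = 0;                      // assignment stores into memory
    for (int i = 1; i <= 5; i++) {      // looping statement
        if (i % 2 == 0)                 // conditional branching
            total = total + square(i);  // expression evaluation + assignment
    }
    printf("%d\n", total);              // total is 4 + 16 = 20
    return 0;
}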
Early in the development of high-level programming languages, the introduction of the block enabled the construction of programs in which a group of statements and declarations could be treated as if they were one statement. This, alongside the introduction of subroutines, enabled complex structures to be expressed by hierarchical decomposition into simpler procedural structures.
Many imperative programming languages (such as Fortran, BASIC, and C) are abstractions of assembly language.[3]
History of imperative and object-oriented languages
The earliest imperative languages were the machine languages of the original computers. In these languages, instructions were very simple, which made hardware implementation easier but hindered the creation of complex programs. Fortran, developed by John Backus at International Business Machines (IBM) starting in 1954, was the first major programming language to remove the obstacles presented by machine code in the creation of complex programs. Fortran was a compiled language that allowed named variables, complex expressions, subprograms, and many other features now common in imperative languages. The next two decades saw the development of many other major high-level imperative programming languages. In the late 1950s and 1960s, ALGOL was developed in order to allow mathematical algorithms to be more easily expressed and even served as the operating system's target language for some computers. MUMPS (1966) carried the imperative paradigm to a logical extreme, by not having any statements at all, relying purely on commands, even to the extent of making the IF and ELSE commands independent of each other, connected only by an intrinsic variable named $TEST. COBOL (1960) and BASIC (1964) were both attempts to make programming syntax look more like English. In the 1970s, Pascal was developed by Niklaus Wirth, and C was created by Dennis Ritchie while he was working at Bell Laboratories. Wirth went on to design Modula-2 and Oberon. For the needs of the United States Department of Defense, Jean Ichbiah and a team at Honeywell began designing Ada in 1978, after a 4-year project to define the requirements for the language. The specification was first published in 1983, with revisions in 1995, 2005, and 2012.
The 1980s saw a rapid growth in interest in object-oriented programming. These languages were imperative in style, but added features to support objects. The last two decades of the 20th century saw the development of many such languages. Smalltalk-80, originally conceived by Alan Kay in 1969, was released in 1980 by the Xerox Palo Alto Research Center (PARC). Drawing from concepts in another object-oriented language—Simula (which is considered the world's first object-oriented programming language, developed in the 1960s)—Bjarne Stroustrup designed C++, an object-oriented language based on C. Design of C++ began in 1979 and the first implementation was completed in 1983. In the late 1980s and 1990s, the notable imperative languages drawing on object-oriented concepts were Perl, released by Larry Wall in 1987; Python, released by Guido van Rossum in 1990; Visual Basic and Visual C++ (which included Microsoft Foundation Class Library (MFC) 2.0), released by Microsoft in 1991 and 1993 respectively; PHP, released by Rasmus Lerdorf in 1994; Java, by James Gosling (Sun Microsystems) in 1995; JavaScript, by Brendan Eich (Netscape), and Ruby, by Yukihiro "Matz" Matsumoto, both released in 1995. Microsoft's .NET Framework (2002) is imperative at its core, as are its main target languages, VB.NET and C#; however, Microsoft's F#, a functional language, also runs on it.
Examples
Fortran
Fortran (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system." It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported arrays, subroutines, and DO loops.
It succeeded because:
- programming and debugging costs were below computer running costs
- it was supported by IBM
- applications at the time were scientific.[4]
Non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler.[4] The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 supports features such as free-form source, modules, recursion, pointers, and dynamic memory allocation.
COBOL
COBOL (1959) stands for "COmmon Business Oriented Language." Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so strings were introduced.[5] The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.[6]
COBOL's development was tightly controlled, so dialects requiring ANSI standardization did not emerge. As a consequence, it was not changed for 15 years, until 1974. The 1990s version did make consequential changes, such as adding object-oriented programming.[6]
ALGOL
ALGOL (1960) stands for "ALGOrithmic Language." It had a profound influence on programming language design.[7] Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable structured design. ALGOL was first to define its syntax using the Backus–Naur form.[7] This led to syntax-directed compilers. It added features like:
- block structure, where variables were local to their block
- arrays with variable bounds
- "for" loops
- functions
- recursion[7]
ALGOL's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch are C, C++ and Java.[7]
BASIC
BASIC (1964) stands for "Beginner's All Purpose Symbolic Instruction Code." It was developed at Dartmouth College so that all of its students could learn it.[8] If a student did not go on to a more powerful language, the student would still remember BASIC.[8] A BASIC interpreter was installed in microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language.[8]
BASIC pioneered the interactive session.[8] It offered operating system commands within its environment:
- The 'new' command created an empty slate
- Statements were evaluated immediately
- Statements could be programmed by preceding them with a line number
- The 'list' command displayed the program
- The 'run' command executed the program
However, the BASIC syntax was too simple for large programs.[8] Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface.[9]
C
C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C." Its purpose was to write the UNIX operating system.[10] C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s.[10] It also grew because it has the facilities of assembly language but uses a high-level syntax. It added advanced features like the following (illustrated in the sketch after the list):
- inline assembler
- arithmetic on pointers
- pointers to functions
- bit operations
- freely combining complex operators[10]
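A compact sketch (invented for illustration) of several of these features — pointer arithmetic, pointers to functions, and bit operations:

#include <stdio.h>

int doubler(int n) { return n << 1; }   // bit operation: shift left by one

int main(void) {
    int data[3] = { 1, 2, 3 };
    int *p = data;
    int (*fn)(int) = doubler;      // pointer to a function
    int sum = 0;
    while (p < data + 3) {         // pointer arithmetic on the array
        sum += fn(*p);             // call through the function pointer
        p++;                       // advance the pointer, not an index
    }
    printf("%d\n", sum);           // (1+2+3) doubled = 12
    return 0;
}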

C allows the programmer to control in which region of memory data is to be stored. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function.
- The global and static data region is located just above the program region. (The program region is technically called the text region; it is where machine instructions are stored.)
- The global and static data region is technically two regions.[11] One region is called the initialized data segment, where variables declared with default values are stored. The other region is called the block started by symbol (BSS) segment, where variables declared without default values are stored.
- Variables stored in the global and static data region have their addresses set at compile-time. They retain their values throughout the life of the process.
- The global and static region stores the global variables that are declared on top of (outside) the main() function.[12] Global variables are visible to main() and every other function in the source code.
- On the other hand, variable declarations inside of main(), other functions, or within {} block delimiters are local variables. Local variables also include formal parameter variables, which are enclosed within the parentheses of function definitions.[13] They provide an interface to the function.
- Local variables declared using the static prefix are also stored in the global and static data region.[11] Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function int increment_counter() { static int counter = 0; counter++; return counter; }
- The stack region is a contiguous block of memory located near the top memory address.[14] Variables placed in the stack are populated from top to bottom.[14] A stack pointer is a special-purpose register that keeps track of the last memory address populated.[14] Variables are placed into the stack via the assembly language PUSH instruction; therefore, the addresses of these variables are set during runtime. Stack variables lose their scope via the POP instruction.
- Local variables declared without the static prefix, including formal parameter variables,[15] are called automatic variables[12] and are stored in the stack.[11] They are visible inside the function or block and lose their scope upon exiting the function or block.
- The heap region is located below the stack.[11] It is populated from the bottom to the top. The operating system manages the heap using a heap pointer and a list of allocated memory blocks.[16] Like the stack, the addresses of heap variables are set during runtime. An out-of-memory error occurs when the heap pointer and the stack pointer meet.
- C provides the malloc() library function to allocate heap memory.[17] Populating the heap with data is an additional copy operation. Variables stored in the heap are economically passed to functions using pointers; without pointers, the entire block of data would have to be passed to the function via the stack. (The regions are illustrated in the sketch after this list.)
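The following short C program, a sketch added for illustration (identifiers such as global_count and heap_value are invented for the example), places one variable in each of the regions described above:

#include <stdio.h>
#include <stdlib.h>

int global_count = 7;        // initialized data segment
int uninitialized_total;     // BSS segment (zero-filled at startup)

int increment_counter(void) {
    static int counter = 0;  // static: global/static region, function-visible
    counter++;
    return counter;
}

int main(void) {
    int local = 3;                                 // automatic variable: stack
    int *heap_value = malloc(sizeof *heap_value);  // heap allocation via malloc()
    if (heap_value == NULL)
        return 1;
    *heap_value = global_count + local;  // populating the heap is an extra copy
    int first = increment_counter();     // static counter persists between calls
    int second = increment_counter();
    printf("%d %d %d\n", *heap_value, first, second);  // prints: 10 1 2
    free(heap_value);                    // return the heap block
    return 0;
}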
C++
In the 1970s, software engineers needed language support to break large projects down into modules.[18] One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract datatypes.[18] At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Concrete datatypes have their representation as part of their name.[19] Abstract datatypes are structures of concrete datatypes — with a new name assigned. For example, a list of integers could be called integer_list.
In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class, it's called an object.[20]
Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming.[21] A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects.[22]
Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other persons don't have. Object-oriented languages model subset/superset relationships using inheritance.[23] Object-oriented programming became the dominant language paradigm by the late 1990s.[18]
C++ (1985) was originally called "C with Classes."[24] It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula.[25]
An object-oriented module is composed of two files. The definitions file is called the header file. Here is a C++ header file for the GRADE class in a simple school application:
// grade.h
// -------
// Used to allow multiple source files to include
// this header file without duplication errors.
// See: https://en.wikipedia.org/wiki/Include_guard
// ----------------------------------------------
#ifndef GRADE_H
#define GRADE_H
class GRADE {
public:
    // This is the constructor operation.
    // ----------------------------------
    GRADE( const char letter );
    // This is a class variable.
    // -------------------------
    char letter;
    // This is a member operation.
    // ---------------------------
    int grade_numeric( const char letter );
    // This is a class variable.
    // -------------------------
    int numeric;
};
#endif
A constructor operation is a function with the same name as the class name.[26] It is executed when the calling operation executes the new statement.
A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application:
// grade.cpp
// ---------
#include "grade.h"

GRADE::GRADE( const char letter )
{
    // Reference the object using the keyword 'this'.
    // ----------------------------------------------
    this->letter = letter;
    // This is Temporal Cohesion
    // -------------------------
    this->numeric = grade_numeric( letter );
}

int GRADE::grade_numeric( const char letter )
{
    if ( letter == 'A' || letter == 'a' )
        return 4;
    else if ( letter == 'B' || letter == 'b' )
        return 3;
    else if ( letter == 'C' || letter == 'c' )
        return 2;
    else if ( letter == 'D' || letter == 'd' )
        return 1;
    else if ( letter == 'F' || letter == 'f' )
        return 0;
    else
        return -1;
}
Here is a C++ header file for the PERSON class in a simple school application:
// person.h
// --------
#ifndef PERSON_H
#define PERSON_H
class PERSON {
public:
    PERSON( const char *name );
    const char *name;
};
#endif
Here is a C++ source file for the PERSON class in a simple school application:
// person.cpp
// ----------
#include "person.h"

PERSON::PERSON( const char *name )
{
    this->name = name;
}
Here is a C++ header file for the STUDENT class in a simple school application:
// student.h
// ---------
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"

// A STUDENT is a subset of PERSON.
// --------------------------------
class STUDENT : public PERSON {
public:
    STUDENT( const char *name );
    ~STUDENT();
    GRADE *grade;
};
#endif
Here is a C++ source file for the STUDENT class in a simple school application:
// student.cpp
// -----------
#include "student.h"
#include "person.h"

STUDENT::STUDENT( const char *name ):
    // Execute the constructor of the PERSON superclass.
    // --------------------------------------------------
    PERSON( name )
{
    // Initialize the grade pointer so the destructor's
    // delete is safe even if no GRADE is ever assigned.
    // --------------------------------------------------
    this->grade = nullptr;
}

STUDENT::~STUDENT()
{
    // Deallocate grade's memory to avoid memory leaks.
    // --------------------------------------------------
    delete this->grade;
}
Here is a driver program for demonstration:
// student_dvr.cpp
// ---------------
#include <iostream>
#include "student.h"

int main( void )
{
    STUDENT *student = new STUDENT( "The Student" );
    student->grade = new GRADE( 'a' );
    std::cout
        // Notice student inherits PERSON's name.
        << student->name
        << ": Numeric grade = "
        << student->grade->numeric
        << "\n";
    // Deallocate student's memory to avoid memory leaks.
    // ---------------------------------------------------
    delete student;
    return 0;
}
Here is a makefile to compile everything:
# makefile
# --------
all: student_dvr

clean:
	rm student_dvr *.o

student_dvr: student_dvr.cpp grade.o student.o person.o
	c++ student_dvr.cpp grade.o student.o person.o -o student_dvr

grade.o: grade.cpp grade.h
	c++ -c grade.cpp

student.o: student.cpp student.h
	c++ -c student.cpp

person.o: person.cpp person.h
	c++ -c person.cpp
Notes
[edit]- ^ Reconfigurable computing is a notable exception.
References
- ^ Jain, Anisha (2022-12-10). "Javascript Promises— Is There a Better Approach?". Medium. Archived from the original on 2022-12-20. Retrieved 2022-12-20.
- ^ "Imperative programming: Overview of the oldest programming paradigm". IONOS Digitalguide. 21 May 2021. Archived from the original on 2022-05-03. Retrieved 2022-05-03.
- ^ Bruce Eckel (2006). Thinking in Java. Pearson Education. p. 24. ISBN 978-0-13-187248-6.
- ^ a b Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 16. ISBN 0-201-71012-9.
- ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 24. ISBN 0-201-71012-9.
- ^ a b Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 25. ISBN 0-201-71012-9.
- ^ a b c d Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 19. ISBN 0-201-71012-9.
- ^ a b c d e Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 30. ISBN 0-201-71012-9.
- ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 31. ISBN 0-201-71012-9.
- ^ a b c Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 37. ISBN 0-201-71012-9.
- ^ a b c d "Memory Layout of C Programs". 12 September 2011. Archived from the original on 6 November 2021. Retrieved 25 May 2022.
- ^ a b Kernighan, Brian W.; Ritchie, Dennis M. (1988). The C Programming Language Second Edition. Prentice Hall. p. 31. ISBN 0-13-110362-8.
- ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 128. ISBN 0-201-71012-9.
- ^ a b c Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 121. ISBN 978-1-59327-220-3.
- ^ Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 122. ISBN 978-1-59327-220-3.
- ^ Kernighan, Brian W.; Ritchie, Dennis M. (1988). The C Programming Language Second Edition. Prentice Hall. p. 185. ISBN 0-13-110362-8.
- ^ Kernighan, Brian W.; Ritchie, Dennis M. (1988). The C Programming Language Second Edition. Prentice Hall. p. 187. ISBN 0-13-110362-8.
- ^ a b c Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 38. ISBN 0-201-71012-9.
- ^ Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 65. ISBN 978-0-321-56384-2.
- ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 193. ISBN 0-201-71012-9.
- ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 39. ISBN 0-201-71012-9.
- ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 35. ISBN 0-201-71012-9.
- ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 192. ISBN 0-201-71012-9.
- ^ Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 22. ISBN 978-0-321-56384-2.
- ^ Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 21. ISBN 978-0-321-56384-2.
- ^ Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 49. ISBN 978-0-321-56384-2.
- Pratt, Terrence W. and Marvin V. Zelkowitz. Programming Languages: Design and Implementation, 3rd ed. Englewood Cliffs, N.J.: Prentice Hall, 1996.
- Sebesta, Robert W. Concepts of Programming Languages, 3rd ed. Reading, Mass.: Addison-Wesley Publishing Company, 1996.
- Originally based on the article 'Imperative programming' by Stan Seibert, from Nupedia, licensed under the GNU Free Documentation License.
Imperative programming
Overview
Definition
Imperative programming is a programming paradigm in which programs are composed of sequences of commands or statements that explicitly describe how to perform computations by modifying the program's state through operations such as assignments and updates to variables.[1] This approach structures code as a series of step-by-step instructions executed in a specific order, directly manipulating memory locations to achieve the desired outcome.[9] In contrast to declarative programming, which focuses on specifying what the program should accomplish without detailing the control flow or steps involved, imperative programming emphasizes the "how" of computation by explicitly outlining the sequence of actions needed to transform inputs into outputs.[5] This distinction highlights imperative programming's reliance on mutable state and explicit sequencing, whereas declarative paradigms prioritize descriptions of relationships or goals, leaving the execution details to the underlying system.[1] Imperative programming closely aligns with the von Neumann architecture, the foundational model for most modern computers, where programs consist of step-by-step instructions that mirror the machine's fetch-execute cycle, accessing and altering shared memory for both data and code.[10] This architecture's design, featuring a central processor that sequentially executes commands from memory, naturally supports the imperative model's emphasis on ordered state changes and direct hardware emulation in software.[3]
Key Characteristics
Imperative programming is distinguished by its reliance on mutable state, where variables and data structures can be altered during execution to represent evolving computational states. This mutability allows programs to maintain and update internal representations of data, facilitating complex algorithms that track changes over time. For instance, a variable might initially hold one value and later be reassigned based on intermediate results, enabling the encoding of dynamic information directly in memory cells.[11] A fundamental aspect of this paradigm is sequential execution, in which programs are structured as ordered sequences of statements executed one after another, with explicit control flow mechanisms like loops and conditionals directing the order of operations. This step-by-step approach mirrors the linear processing typical of computer hardware, ensuring that each instruction modifies the program's state predictably before proceeding to the next. The design of imperative languages draws directly from the von Neumann architecture, which separates instructions from data but executes them sequentially to update machine state.[3][12] Central to state management in imperative programming is the assignment operation, serving as the primary mechanism for effecting changes, commonly denoted as variable = expression. This operation evaluates the right-hand side and stores the result in the named location, directly altering the program's observable behavior and enabling side effects such as input/output interactions or global modifications.[13] Unlike functional programming, which prioritizes immutability and referential transparency to eliminate side effects, imperative programming embraces them as essential for efficiency and expressiveness in tasks involving external resources or persistent changes. In imperative styles, state modifications and ordered execution are crucial, whereas functional approaches minimize their importance to focus on composable computations without altering external state.[14]
Theoretical Foundations
Rationale
Imperative programming aligns closely with human cognitive processes by emphasizing sequential, step-by-step instructions that mirror the natural way individuals break down problems into ordered actions, much like following a recipe or outlining a procedure. This approach allows programmers to express algorithms in a linear fashion, making it straightforward to conceptualize and implement solutions that reflect everyday reasoning. As a result, imperative programming is particularly accessible for beginners, who can intuitively grasp concepts such as variables and control flow without needing to abstract away from direct command sequences.[15][8][9] From a practical standpoint, the paradigm's design provides significant hardware efficiency, as its constructs—such as assignments and loops—map directly to the basic operations of central processing units, enabling fine-grained control over memory and execution for high performance. This direct correspondence stems from its foundational influence by the Von Neumann architecture, where instructions and data reside in a shared memory space, facilitating efficient translation to machine code.[16][17][9] Despite these strengths, imperative programming involves trade-offs: the explicit management of state changes aids debugging through clear traceability of program flow, yet it can heighten complexity in large-scale systems, where mutable variables and intricate interdependencies often lead to challenges in maintenance and scalability.[8]
Computational Basis
Imperative programming finds its computational foundation in the Turing machine model, introduced by Alan Turing in 1936 as a theoretical device capable of simulating any algorithmic process through a series of discrete state transitions. A Turing machine consists of a finite set of states, a tape serving as unbounded memory, and a read-write head that moves along the tape according to a fixed set of transition rules based on the current state and symbol read. Imperative programs emulate this by maintaining an internal state—such as variables and memory locations—that evolves step-by-step through explicit instructions like assignments and conditionals, effectively replicating the finite control and mutable storage of the Turing machine to perform arbitrary computations.[18] To incorporate imperative features into functional paradigms, extensions to the pure lambda calculus introduce mutable bindings and state manipulation, bridging the gap between applicative-order evaluation and side-effecting operations. The lambda calculus, originally developed by Alonzo Church, models computation purely through function abstraction and application without explicit state, but imperative extensions add constructs like assignment and sequencing to simulate mutable variables as transformations on an underlying state environment. A seminal exploration of this is provided by Steele and Sussman, who demonstrate how imperative constructs such as GOTO statements, assignments, and coroutines can be encoded within an extended lambda calculus using continuations and applicative-order reduction, thus showing the expressiveness of lambda-based models for imperative programming.[19] The Church-Turing thesis underpins the universality of imperative programming by asserting that any function computable by an effective procedure is computable by a Turing machine, with imperative languages achieving this through sequential state modifications that mirror the machine's transitions. Formulated independently by Church and Turing in the 1930s, the thesis equates effective calculability with Turing computability, implying that imperative programs, by manipulating state in a deterministic, step-wise manner, can simulate any Turing machine and thus compute any recursive function. This establishes imperative style as a practical embodiment of universal computation, where state changes enable the realization of all effectively computable processes without reliance on non-deterministic oracles.[20] In formal semantics, imperative languages are rigorously defined using denotational approaches that interpret programs as state transformers, mapping initial machine states to resulting states or sets of possible outcomes. Developed through the Scott-Strachey framework in the 1970s, this method assigns mathematical meanings to syntactic constructs in a compositional manner, treating statements as monotone functions from state spaces to state spaces (or powersets thereof for non-determinism). For instance, an assignment like x := e denotes a partial function that updates the state by evaluating e in the current state and modifying the binding for x, while sequencing composes such transformers. Joseph Stoy's comprehensive treatment elucidates how this model handles the observable behavior of imperative programs by focusing on input-output relations over states, providing a foundation for proving properties like equivalence and correctness.[21]
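As a sketch of the state-transformer idea in conventional denotational notation (the metavariables σ for states and the semantic brackets are standard conventions, not drawn from the cited text):

\llbracket x := e \rrbracket(\sigma) = \sigma[x \mapsto \mathcal{E}\llbracket e \rrbracket(\sigma)]
\qquad
\llbracket S_1 ; S_2 \rrbracket = \llbracket S_2 \rrbracket \circ \llbracket S_1 \rrbracket

Here σ is a machine state mapping variables to values, \mathcal{E} evaluates an expression in a given state, and sequencing is simply function composition of the two state transformers.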
Historical Development
Early Origins
The origins of imperative programming trace back to the 1940s, when early efforts sought to formalize sequences of instructions for computational tasks. Konrad Zuse, a German engineer, developed Plankalkül between 1943 and 1945 as a high-level notation for engineering calculations, featuring imperative constructs such as loops, conditionals, and subroutines to manipulate variables and perform arithmetic operations.[22] This design emphasized step-by-step execution of commands to achieve desired outcomes, predating widespread computer implementation but laying conceptual groundwork for imperative styles.[22] Hardware developments in the mid-1940s further shaped imperative programming through the need for explicit instruction sequences. The ENIAC, completed in 1945 by John Presper Eckert and John Mauchly, initially relied on wired panels and switches for programming, requiring programmers to configure control flows manually for tasks like ballistic computations.[23] Its conversion in 1948 to a stored-program configuration, influenced by John von Neumann's 1945 EDVAC report, enabled instructions to be held in memory alongside data, promoting sequential execution models central to imperative paradigms.[24] This von Neumann architecture, with its unified memory for programs and data, provided the enabling framework for imperative instruction streams. In 1949, John Mauchly proposed Short Code, an early interpretive system for the BINAC computer, marking the first compiler-like tool for imperative programming.[25] Designed to translate simple arithmetic and control statements into machine instructions, it allowed programmers to write sequences like addition or branching without direct hardware manipulation, bridging low-level coding toward higher abstraction.[25] Implemented by William Schmitt, Short Code ran on the BINAC and later influenced UNIVAC systems, demonstrating imperative programming's practicality for scientific computation.[26] Assembly languages emerged concurrently in the late 1940s as a low-level imperative intermediary, using mnemonic codes to represent machine instructions and facilitating sequential program assembly. For instance, early assemblers for machines like the Manchester Mark 1 in 1948 translated symbolic operations into binary, easing the burden of pure machine code while retaining direct control over state changes and execution order.[27] This approach served as a foundational bridge to higher-level imperative languages, emphasizing explicit commands to manipulate processor registers and memory.[27]
Mid-20th Century Advances
The mid-20th century marked a pivotal era in imperative programming, characterized by the creation of high-level languages that shifted focus from machine-specific instructions to more abstract, domain-oriented constructs, thereby accelerating software development for scientific, business, and educational applications. Fortran, released by IBM in 1957, represented the first widely adopted high-level imperative language, specifically tailored for scientific computing on systems like the IBM 704.[28] Developed under John Backus's leadership, it aimed to drastically reduce the effort required to program complex numerical problems by providing imperative features such as loops, conditional statements, and array operations that mirrored mathematical notation.[29] This innovation enabled programmers to express computations in a more natural, step-by-step manner, significantly boosting productivity in engineering and research fields. In 1959, COBOL (Common Business-Oriented Language) was introduced as an imperative language optimized for business data processing, featuring verbose, English-like syntax to enhance readability among non-specialist users.[30] Spearheaded by a CODASYL committee including Grace Hopper, it supported imperative operations for file handling, report generation, and arithmetic on business records, standardizing practices across diverse hardware platforms.[31] COBOL's design emphasized sequential execution and data manipulation, making it a cornerstone for enterprise applications. ALGOL's evolution from its 1958 proposal through ALGOL 60 (1960) and ALGOL 68 (1968) introduced foundational imperative concepts like block structure, which delimited scopes for variables and statements to promote modularity. Additionally, it pioneered lexical (static) scoping, ensuring variable bindings were resolved based on textual position rather than runtime dynamics, thus improving predictability and maintainability in imperative code.[32] These advancements, formalized in international reports, influenced countless subsequent languages by establishing rigorous syntax for control flow and data localization. To broaden access, BASIC (Beginner's All-Purpose Symbolic Instruction Code) was developed in 1964 by John Kemeny and Thomas Kurtz at Dartmouth College as a streamlined imperative language for time-sharing systems.[33] With simple syntax and interactive execution, it targeted educational use, allowing novices to write imperative programs involving basic assignments, branches, and loops without deep hardware knowledge. This accessibility democratized programming, fostering its adoption in teaching and early personal computing.
Core Concepts
State Management
In imperative programming, variables serve as the primary mechanism for holding and modifying program state, acting as abstractions of memory cells that store values which can be accessed and altered during execution. Declaration typically involves specifying a variable's name and type, such as int x; in C, which allocates space for the variable without assigning an initial value.[34] Initialization follows by assigning an initial value, for example int x = 0;, ensuring the variable begins in a defined state to avoid undefined behavior.[35] Reassignment, often via the assignment operator like x = 5;, allows the variable's value to change, directly updating the program's state and enabling mutable computations central to the paradigm.[36]
The scope and lifetime of variables determine their visibility and duration in memory, distinguishing local from global variables to manage state isolation and persistence. Local variables, declared within a function or block, have scope limited to that enclosing region, promoting encapsulation by preventing unintended interactions with outer code; their lifetime is typically tied to the stack, where they are automatically allocated upon entry and deallocated upon exit, as in void func() { int local_var = 10; }.[37] Global variables, declared outside functions, possess program-wide scope and static lifetime, residing in a fixed memory segment accessible throughout execution, which facilitates shared state but risks naming conflicts and maintenance issues.[38] Heap allocation, invoked dynamically via operations like malloc in C, extends lifetime beyond scope, allowing variables to persist until explicitly freed, thus supporting flexible data structures like linked lists.
Imperative languages employ linear memory models, where the computer's address space is treated as a contiguous array of bytes, enabling direct manipulation through addresses for efficient state access. This Von Neumann architecture underpins imperative programming, separating instructions from data in a unified memory, with variables mapped to specific addresses for sequential or random access.[39] Pointers extend this model by storing memory addresses themselves, as in int *ptr = &x;, permitting indirect reference and modification of state, which is essential for operations like array traversal or dynamic data structures but introduces risks such as dangling references if mismanaged.[40]
State changes via variable modifications introduce side effects, where an operation alters the global program state beyond its primary computation, affecting predictability and requiring careful ordering for reliable behavior. In imperative code, functions may modify variables outside their local scope, such as incrementing a global counter, leading to interdependent execution where the order of statements influences outcomes and can complicate debugging or parallelization.[38] These side effects enhance expressiveness for tasks like I/O or simulations but demand explicit sequencing to maintain determinism, as unpredictable interactions can arise from shared mutable state.[41] Assignment operations exemplify this, directly reassigning values to propagate changes across the program.[42]
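A short C sketch (added for illustration; the identifiers are invented) ties these ideas together: declaration, initialization, reassignment, pointer indirection, and a side effect on global state:

#include <stdio.h>

int total = 0;                 // global state with program-wide scope

void add_to_total(int amount) {
    total = total + amount;    // side effect: mutates state outside the function
}

int main(void) {
    int x;                     // declaration: a stack cell, value undefined
    x = 5;                     // initialization by assignment
    x = x + 1;                 // reassignment: state is now 6
    int *ptr = &x;             // pointer holds the address of x
    *ptr = 10;                 // indirect modification of x through its address
    add_to_total(x);           // statement order matters: total becomes 10
    printf("%d %d\n", x, total);
    return 0;
}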
Control Structures
In imperative programming, statements are executed sequentially by default, following the linear order in which they are written in the source code. This fundamental control mechanism reflects the stored-program concept of the von Neumann architecture, where instructions are fetched, decoded, and executed one at a time in a predictable sequence, forming the basis for algorithmic description through step-by-step operations.[43] Conditional branching enables decision-making by evaluating boolean expressions to direct program flow. The if-then-else construct is the canonical form, where execution proceeds to the "then" block if the condition holds true, or to the "else" block otherwise; nested or chained conditions allow complex logic without unstructured jumps. This structured alternative to goto statements was standardized in ALGOL 60, promoting readable and maintainable code by avoiding arbitrary transfers of control.[44] For example, in pseudocode:
if (x > 0) then
y = x * 2
else
y = x * -1
end if
Iteration constructs repeat a block of statements either a fixed number of times or until a condition fails. For example, a counting loop that accumulates a sum:
for i from 1 to 10 do
sum = sum + i
end for
Many imperative languages also provide structured exception handling, which transfers control to a handler block when an error occurs:
try
divide(a, b)
catch (DivisionByZero e)
log("Error: " + e.message)
return default_value
end try
Modularity
Modularity in imperative programming refers to the practice of dividing a program into smaller, independent components that can be developed, tested, and maintained separately, thereby enhancing reusability and manageability. This approach allows programmers to structure code around sequences of imperative statements while promoting abstraction and reducing complexity in large systems. By encapsulating related operations, modularity facilitates code reuse across different parts of a program or even in separate projects, aligning with the paradigm's emphasis on explicit control over program state and execution flow.[48] Procedures and subroutines form the foundational units of modularity in imperative programming, serving as named blocks of code that perform specific tasks and can be invoked multiple times to avoid duplication. A procedure typically accepts parameters—values passed at invocation to customize its behavior—and may produce return values to communicate results back to the calling code, enabling flexible reuse without rewriting logic. Subroutines, often synonymous with procedures in early imperative contexts, similarly encapsulate imperative instructions, such as assignments and control structures, to execute a defined sequence while preserving the overall program's state management. This mechanism supports hierarchical decomposition, where complex tasks are broken into simpler, reusable subunits.[49][50] A key distinction exists between functions and procedures in imperative languages, primarily in their handling of state and outputs. Functions are designed to compute and return a value based on inputs, ideally avoiding side effects on external state to ensure predictability and composability, whereas procedures primarily execute actions that may modify program state through side effects without necessarily returning a value. This separation encourages pure computation in functions for reuse in expressions, while procedures handle imperative operations like input/output or updates, reflecting the paradigm's focus on mutable state. For instance, in languages enforcing this divide, functions remain referentially transparent, aiding modular verification.[51] Libraries and modules extend modularity by allowing the import of pre-defined collections of procedures, functions, and data into a program, providing reusable imperative units without exposing their internal implementation. A module acts as a namespace encapsulating related components, enabling programmers to link external code that performs common tasks, such as mathematical operations or data handling, while maintaining separation of concerns. Libraries, often compiled separately, promote large-scale reuse by bundling tested imperative routines, reducing development time and ensuring consistency across applications. This import mechanism supports the construction of complex systems from verified building blocks.[10][52] Encapsulation basics in imperative modular designs involve hiding internal state and implementation details within procedures or modules, exposing only necessary interfaces to prevent unintended interactions and simplify maintenance. By restricting access to local variables and logic, encapsulation enforces information hiding, where the calling code interacts solely through parameters and return values, shielding it from changes in the module's internals. 
This principle reduces coupling between components, allowing modifications to one module without affecting others, and supports scalable imperative programming by minimizing global state dependencies. Seminal work on this emphasizes decomposing systems based on information hiding criteria to maximize flexibility and comprehensibility.[48][53]
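A minimal C sketch of these ideas (invented for illustration): the counter's representation is hidden inside one file, so callers interact only through parameters and return values:

// counter.c — a module hiding its state behind a small interface
static int counter = 0;        // internal state; file-scope static linkage
                               // keeps it invisible to other source files

void counter_reset(void) {     // procedure: performs a side effect, no return value
    counter = 0;
}

int counter_next(void) {       // function: returns a value to the caller
    counter++;
    return counter;
}

Because only counter_reset and counter_next are exposed, the module's internal representation could change (say, to a wider integer type) without affecting any calling code.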
Programming Styles
Procedural Approach
Procedural programming represents a fundamental style within the imperative paradigm, emphasizing the organization of code into discrete procedures or subroutines that encapsulate specific operations while treating data as separate entities accessible across these units. This approach structures programs as a sequence of instructions executed step by step, with procedures invoked to perform reusable tasks, thereby promoting reusability and clarity in control flow. According to definitions in software engineering literature, procedural programs process input data sequentially through these procedures until completion, often involving initialization, main execution, and cleanup phases.[54] A key aspect of procedural programming is top-down design, a methodology where complex problems are decomposed hierarchically starting from a high-level overview and progressively refining into smaller, manageable procedures. This technique, integral to structured programming, allows developers to outline the overall program structure first—such as a main routine orchestrating subordinate functions—before detailing implementations, facilitating systematic development and debugging. Pioneered in the context of imperative languages, top-down design aligns with principles advocated by Edsger W. Dijkstra in his foundational work on structured programming, which emphasized hierarchical control to eliminate unstructured jumps like goto statements.[55] Early principles of data hiding in procedural programming emerged as a means to achieve separation of concerns by localizing implementation details within procedures or modules, without relying on object-oriented mechanisms. This involves restricting access to certain data or algorithms to specific procedures, using techniques like passing parameters and returning values to avoid global dependencies, which reduces coupling and enhances maintainability. David Parnas formalized these ideas in his seminal 1972 paper, introducing information hiding as a criterion for module decomposition, where each module conceals volatile design decisions to minimize ripple effects from changes. In practice, the flow of a procedural program typically begins with a main procedure that initializes data and sequentially calls subordinate procedures to handle subtasks, such as input processing followed by computation and output generation. For instance, the main routine might invoke a procedure to read data, another to perform calculations on that data, and a final one to display results, ensuring a linear yet modular execution path. This structure exemplifies the paradigm's reliance on procedural calls to manage state changes explicitly, building on modularity concepts for scalable design.[56]
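A hedged sketch of that flow in C (the procedure names and stand-in bodies are invented for the example):

#include <stdio.h>

// Subordinate procedures, refined from a top-level outline.
int read_data(void)         { return 42; }           // stand-in for input
int compute(int value)      { return value * 2; }    // stand-in for processing
void print_result(int out)  { printf("%d\n", out); } // stand-in for output

int main(void) {
    int data = read_data();      // initialization
    int result = compute(data);  // main computation
    print_result(result);        // output
    return 0;
}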
Object-Oriented Extension
Object-oriented programming (OOP) extends imperative programming by introducing classes as mechanisms to encapsulate data and associated methods, treating objects as self-contained units that manage state through imperative operations. In this paradigm, a class defines both the structure for data attributes—often mutable variables that hold the object's state—and the imperative procedures (methods) that manipulate this state, allowing for localized control over modifications while maintaining the overall program's sequential execution flow.[57][58] The imperative foundation remains evident in OOP through features like mutable objects, where instance variables can be altered during program execution, and the use of traditional control structures such as loops and conditionals embedded within methods to direct state changes. For instance, methods often employ while loops or if-else statements to iteratively update object attributes based on conditions, preserving the step-by-step command sequence characteristic of imperative programming while organizing these commands around object instances. This integration ensures that OOP does not abandon imperative principles but enhances them with structured state management.[59][60] A prominent example of this extension is C++, developed in the 1980s by Bjarne Stroustrup as an evolution of the imperative language C, incorporating OOP features like classes and inheritance to support abstract data types without sacrificing C's low-level control and efficiency. Initially released in 1985, C++ built directly on C's procedural imperative style, adding object-oriented constructs to enable better modeling of complex systems through encapsulated entities.[61] This blend combines imperative control flows—such as explicit sequencing of statements and direct memory manipulation—with OOP abstractions like polymorphism and encapsulation, facilitating modular code that scales for large software systems while retaining the predictability of imperative execution. Emerging in the late 1970s and 1980s as a shift from pure procedural approaches, this hybrid paradigm has influenced numerous languages by prioritizing both detailed state manipulation and high-level organization.[61][62]
Language Examples
Fortran
Fortran, formally known as FORmula TRANslation, emerged in 1957 as the first widely adopted high-level programming language, developed by John Backus and his team at IBM for the IBM 704 computer to facilitate numerical computations in scientific and engineering applications.[28][63] This development addressed the inefficiencies of assembly language programming, enabling more direct expression of mathematical algorithms through imperative constructs that modify program state sequentially.[64] The language's fixed-format syntax, a hallmark of its early versions, organizes code into 72-character punch-card lines: columns 1 through 5 reserve space for statement labels (often used for branching), column 6 indicates continuations with a non-blank character, and columns 7 through 72 contain the executable code. Variable assignments follow an imperative model, using the equals sign to update state, as in RESULT = X * Y + Z, where variables are implicitly typed based on their names (e.g., those starting with I through N are integers).[65] Control flow relies on DO loops for repetition, structured as DO label index = start, end to iterate over a range, terminating with a labeled CONTINUE statement, and arithmetic IF statements for branching, written as IF (expression) label1, label2, label3 to direct execution based on whether the result is negative, zero, or positive.[65]
A representative example of Fortran's imperative style is a program that initializes an array of the first 10 positive integers and computes the sum of those exceeding 5, demonstrating state modification via loops and conditionals:
      PROGRAM ARRAYSUM
      INTEGER ARRAY(10), I, SUM
      SUM = 0
      DO 10 I = 1, 10
         ARRAY(I) = I
   10 CONTINUE
      DO 20 I = 1, 10
         IF (ARRAY(I) .GT. 5) SUM = SUM + ARRAY(I)
   20 CONTINUE
      WRITE (6, 30) SUM
   30 FORMAT (' Sum is ', I3)
      END
C
C exemplifies imperative programming through its emphasis on explicit control over program state and execution flow, particularly via low-level memory manipulation and sequential instructions. Developed in the early 1970s at Bell Labs for Unix systems programming, C provides direct access to hardware resources, making it a foundational language for operating systems and embedded software. Its syntax prioritizes mutable state, where programmers issue commands to modify variables and memory step by step. Key syntax elements in C underscore its imperative nature. Pointers enable direct memory addressing and manipulation, allowing programs to reference and alter data locations explicitly, as in *ptr = value to dereference and assign.[67] Arrays provide contiguous blocks of memory for storing collections, accessed imperatively via indices like array[i] = data, facilitating iterative modifications. Control structures such as while loops enforce sequential execution based on conditions, exemplified by while (condition) { imperative statements; }, which repeatedly mutates state until the condition fails. Function calls support modularity by encapsulating imperative sequences, invoked as func(arg), where arguments are passed by value or pointer to enable state changes across scopes.[67]
C's use cases highlight its imperative strengths in systems programming and manual memory management. It is widely employed for developing operating systems, device drivers, and performance-critical applications due to its ability to interface directly with hardware and manage resources efficiently. Memory allocation via malloc dynamically requests heap space at runtime, returning a pointer to a block of specified bytes, while free deallocates it to prevent leaks, requiring programmers to imperatively track and release resources.[67] This explicit control suits low-level tasks but demands careful state management to avoid errors like dangling pointers.
A representative example of imperative programming in C is the implementation of a singly linked list, where nodes are dynamically allocated and linked through pointer mutations. The following code demonstrates insertion and traversal, mutating the list state imperatively:
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};

struct Node* head = NULL;

void insert(int value) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = value;
    newNode->next = head;
    head = newNode; // Mutate head pointer
}

void printList() {
    struct Node* temp = head;
    while (temp != NULL) { // Imperative loop with state traversal
        printf("%d ", temp->data);
        temp = temp->next; // Mutate traversal pointer
    }
    printf("\n");
}

int main() {
    insert(3);
    insert(2);
    insert(1);
    printList(); // Outputs: 1 2 3
    return 0;
}
The program allocates nodes with malloc, links them by updating pointers, and traverses the list via a while loop, embodying imperative state changes. Freeing memory (e.g., via a separate traversal with free) would complete the example but is omitted for brevity.
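For completeness, here is a sketch of that omitted cleanup (assuming the head pointer and struct Node from the example above):

void freeList(void) {
    struct Node* temp = head;
    while (temp != NULL) {
        struct Node* next = temp->next; // save the link before freeing
        free(temp);                     // return the node to the heap
        temp = next;                    // advance via the saved pointer
    }
    head = NULL;                        // leave the list in a valid empty state
}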
C's portability was enhanced by the ANSI X3.159-1989 standard, which formalized its imperative constructs to ensure consistent behavior across diverse computing systems. Ratified in 1989, this standard defined syntax and semantics for elements like pointers, arrays, loops, and functions, promoting reliable code execution without platform-specific adaptations.[67] By codifying existing practices, ANSI C facilitated widespread adoption in imperative systems development.
Python
Python is a high-level, interpreted programming language designed with an imperative core, first released on February 20, 1991, by Guido van Rossum at Centrum Wiskunde & Informatica in the Netherlands.[68] As a multi-paradigm language, it primarily employs imperative programming through sequential execution and explicit state management, while optionally incorporating object-oriented and functional elements to enhance flexibility without altering its foundational imperative approach.[69] This design emphasizes readability and simplicity, using indentation for code blocks rather than braces or keywords, which aligns with imperative principles of direct control over program flow and data mutation.[70] Key imperative constructs in Python include for and while loops for repetitive tasks, if-elif-else statements for conditional branching, and def for defining functions that operate on mutable data structures such as lists and dictionaries. These elements allow programmers to explicitly manage program state, for instance by appending to a list within a loop or updating dictionary values based on conditions, embodying the step-by-step mutation characteristic of imperative programming.[70] Functions defined with def can encapsulate state changes, promoting modularity while maintaining the language's focus on procedural execution. A practical example of imperative programming in Python is a script for processing a text file, where state is modified through a mutable list and exceptions are handled to ensure robust file operations:

def process_file(filename):
    lines = []  # Mutable list to hold processed data
    try:
        with open(filename, 'r') as file:
            for line in file:  # Imperative loop to read and mutate state
                if line.strip():  # Conditional check
                    lines.append(line.strip().upper())  # State mutation
    except IOError as e:
        print(f"Error reading file: {e}")
        return None
    return lines
