
Non-structured programming

from Wikipedia

Non-structured programming (also known as unstructured programming) is the programming paradigm that characterized the state of the art before structured programming was envisioned. It relies on the goto statement for control flow, using jumps to implement both selection (i.e. if/then/else) and iteration (i.e. while and for loops).

The use of goto, particularly for selection and iteration, was criticized for producing unreadable "spaghetti code" in the 1968 open letter Go To Statement Considered Harmful by Dutch computer scientist Edsger W. Dijkstra,[1] who coined the term structured programming.[citation needed]

Any programming language that provides goto can be used to write unstructured code. Notable languages that relied primarily, if not exclusively, on goto for control flow include JOSS, FOCAL, TELCOMP, assembly languages, batch files, and early versions of BASIC, Fortran, COBOL, and MUMPS.

Sources

  • Dijkstra, Edsger W. (March 1968). "Letters to the editor: Go to statement considered harmful" (PDF). Communications of the ACM. 11 (3): 147–148. doi:10.1145/362929.362947. S2CID 17469809. The unbridled use of the go to statement has as an immediate consequence that it becomes terribly hard to find a meaningful set of coordinates in which to describe the process progress. ... The go to statement as it stands is just too primitive, it is too much an invitation to make a mess of one's program.

from Grokipedia
Non-structured programming, also known as unstructured programming, is an early programming paradigm that relies on sequential execution of statements interspersed with arbitrary jumps, primarily via mechanisms like the GOTO statement, to control program flow, often resulting in tangled and difficult-to-follow code structures commonly termed "spaghetti code."[1] This approach features a single main program body that directly manipulates global data without modular decomposition into procedures or blocks, making it suitable for small, simple scripts but prone to errors and inefficiency in larger systems.[2]

Historically, non-structured programming dominated the initial decades of computing, appearing in assembly languages through conditional and unconditional jumps (such as JZ or JE instructions) and in early high-level languages like BASIC and FORTRAN, where line-numbered jumps facilitated rapid but undisciplined development.[1] Its limitations became evident as programs grew in complexity, leading to challenges in debugging, verification, and maintenance due to the lack of predictable control paths.[3] The paradigm's drawbacks were sharply articulated in Edsger W. Dijkstra's influential 1968 letter, "Go To Statement Considered Harmful," published in Communications of the ACM, which contended that the GOTO's primitiveness invites chaotic program design and undermines systematic reasoning about software correctness.[3] This critique catalyzed the structured programming movement in the late 1960s and 1970s, advocating restricted control structures like sequences, if-then-else selections, and while/do loops to enforce clarity and modularity.[1]

Today, non-structured programming is largely obsolete in high-level languages, which either omit GOTO entirely or limit it to specific contexts like error handling, as unrestricted use continues to complicate code analysis and team collaboration.[4] Nonetheless, its legacy informs discussions on control flow in low-level systems programming and serves as a cautionary foundation for understanding the evolution toward more disciplined paradigms.[2]

Definition and Characteristics

Core Definition

Non-structured programming is a paradigm in which control flow—the sequence in which statements or instructions are executed—is managed primarily through unrestricted jumps, such as unconditional GOTO statements, rather than hierarchical constructs like loops or conditionals.[5] This approach results in flat, linear code structures where execution can branch arbitrarily to any point, lacking the modular organization that defines more modern styles.[5] The term "non-structured programming" emerged in the late 1960s and 1970s as a retrospective label for pre-existing practices, coined in direct contrast to the rising structured programming movement, which sought to impose disciplined control flows for better readability and verifiability. It highlights the paradigm's inherent lack of modularity and predictability in execution paths, often leading to complex, intertwined code that is challenging to maintain or prove correct. This distinction was propelled by critiques like Edsger W. Dijkstra's 1968 letter, which decried the harms of unstructured jumps in fostering incomprehensible programs.[6]

Key Features

Non-structured programming features unrestricted control flow, enabling execution to branch arbitrarily to any line of code from any location, which produces non-linear execution paths devoid of enforced sequential ordering. This allows for flexible but unpredictable program behavior, where control can transfer without regard to hierarchical or modular constraints.[7] A prominent aspect is its flat code structure, marked by the lack of nested blocks or lexical scopes for constructs such as conditionals and loops, leading to programs composed as a single continuous block without modular divisions. As a result, data management depends almost entirely on global variables accessible from everywhere, eliminating local encapsulation and increasing the risk of unintended interactions across the entire program.[8] These elements foster error-prone tendencies, including the propensity for infinite loops due to poorly managed jumps and substantial difficulties in debugging stemming from obscured control flows that intersect in complex ways. The opacity of such paths complicates tracing execution and identifying faults, often amplifying development errors in larger programs.[3] Collectively, these traits frequently culminate in spaghetti code, a term denoting source code with convoluted, tangled control structures that resist comprehension and maintenance due to excessive, undisciplined branching.[9]

Historical Development

Early Computing Era

Non-structured programming originated in the 1940s and 1950s amid the hardware limitations of early electronic computers, which lacked high-level abstractions and relied on direct manipulation of machine instructions for control flow. The ENIAC, operational from 1945, represented an initial milestone, programmed through physical reconfiguration of plugboards and switches to sequence operations across its 20 accumulators and specialized units, enabling conditional sequencing and transfers via the master programmer unit without stored programs. This manual approach enforced sequential execution punctuated by explicit jumps, constrained by the machine's limited functional tables (equivalent to about 100 words) and absence of automatic addressing mechanisms.[10] The conceptual foundation for software-based non-structured programming solidified with John von Neumann's 1945 EDVAC report, which outlined a stored-program architecture where instructions and data shared a linear memory, executed sequentially from a program counter, with branches and transfers providing the sole means to alter flow. The design specified fixed word lengths of approximately 30 binary digits for numbers and instructions, using delay-line memory with capacities on the order of thousands of words, and included conditional transfers based on comparisons (e.g., selecting between alternatives if one operand exceeded another). Absolute addressing was mandatory, as the architecture provided no relocation or indirect mechanisms beyond basic input/output registers, compelling programmers to embed all control points as explicit memory locations.[11] This paradigm manifested in early stored-program machines like the UNIVAC I, delivered in 1951, which used machine code instructions for arithmetic, data movement, and control within a 1000-word acoustic delay-line memory, each word comprising 12 decimal digits plus a sign. 
Program flow relied on unconditional transfers (Um instructions) and conditional jumps (Qm for equality, Tm for greater-than comparisons between registers), executed serially with average times of 525 microseconds for additions and up to 3890 microseconds for divisions, highlighting the era's performance bottlenecks. Without built-in subroutines, programmers managed returns manually via stored addresses and jumps, using direct absolute addressing for all operands due to fixed memory blocks and no indexing support.[12] The mid-1950s saw the emergence of assemblers to streamline these practices, formalizing jumps as symbolic operations amid ongoing hardware constraints. Kathleen Booth developed the first assembly language around 1950 for the Automatic Relay Calculator (ARC), introducing mnemonics for machine instructions including control transfers, which simplified encoding absolute addresses on a machine with relay-based memory limited to hundreds of words. Concurrently, the EDSAC, running programs from 1949, employed hand-assembly initially, but Maurice Wilkes, David Wheeler, and Stanley Gill's 1951 book detailed non-structured techniques, such as implementing subroutines through temporary storage of return addresses and long jumps, necessitated by the machine's 1024-word mercury delay-line memory and lack of native call instructions. These innovations, rooted in von Neumann's linear access model with explicit branches, underscored how fixed word lengths and minimal memory forced reliance on unstructured jumps for all deviation from sequential execution.[13][14]

Influence of Assembly Languages

Assembly languages, emerging prominently in the 1950s, played a pivotal role in entrenching non-structured programming paradigms by providing low-level mnemonic instructions for direct hardware control, particularly through jump opcodes that enabled arbitrary control flow alterations. On systems like the IBM 704, introduced in 1954, assembly programming utilized instructions such as TRA (Transfer, an unconditional jump) and conditional variants like TZE (Transfer on Zero) and TMI (Transfer on Minus), which allowed programmers to redirect execution to any specified address based on accumulator states.[15] These mechanisms were essential in resource-constrained environments, where sequential execution was the baseline but jumps facilitated efficient branching, looping, and decision-making without higher-level abstractions, making non-structured flow the normative approach for optimizing performance on vacuum-tube based mainframes.[15] By the 1960s, assembly languages proliferated with the rise of minicomputers and mainframes, further solidifying jump-dominated control flow as a standard practice. For instance, the PDP-8 assembly language, part of Digital Equipment Corporation's influential 12-bit minicomputer family launched in 1965, relied heavily on the JMP (Jump) instruction for unconditional transfers and indirect addressing for flexible branching, while the JMS (Jump to Subroutine) provided an optional mechanism for modular calls by storing return addresses in memory.[16] Subroutines were not mandatory; programmers often favored direct jumps for fine-grained optimization in memory-limited settings, where even small overheads mattered, leading to programs composed primarily of interconnected jump sequences rather than hierarchical structures.[16] This era's assembly tools for mainframes and minicomputers emphasized efficiency over readability, with jumps enabling compact code that exploited hardware directly.
The pervasive use of assembly languages trained generations of programmers in jump-heavy coding styles during the 1950s and 1960s, profoundly shaping the design of early high-level languages. Early FORTRAN (1957), developed for the IBM 704, inherited these unstructured elements to ensure generated code mirrored assembly efficiency, incorporating features like the IF-formula for conditional transfers to statement numbers and computed/assigned GOTO statements that emulated direct jumps.[17] This influence stemmed from the need to appeal to assembly-experienced scientists and engineers, allowing flexible, non-hierarchical control flow that prioritized computational speed in scientific applications over strict organization.[17] As a result, assembly's legacy embedded non-structured practices into the foundational layers of programming, influencing decades of software development until structured alternatives gained traction.

Programming Techniques

GOTO Statements

The GOTO statement functions as the cornerstone of control flow in non-structured programming, providing an unconditional transfer of execution to a designated label or line number within the program. This mechanism allows programmers to redirect the flow arbitrarily, bypassing sequential execution and enabling complex branching without reliance on hierarchical structures like loops or conditionals. For instance, in BASIC dialects, the syntax is straightforward: a line number or label precedes the keyword, as in 10 GOTO 20, which immediately shifts control to the specified line, executing it as if it were the next instruction.[18] This simplicity mirrors the underlying machine instructions but permits unrestricted jumps that can span the entire program scope. Variants of the GOTO statement extend its flexibility while preserving the unstructured paradigm. A computed GOTO, common in languages like FORTRAN, evaluates an integer or real expression to select one of several predefined labels from a list, effectively creating a jump table based on runtime values; for example, GOTO (10, 20, 30), I transfers control to the Ith label in the sequence if I equals 1, 2, or 3.[19] Conditional variants, such as IF-GOTO constructs (e.g., IF condition GOTO label in BASIC), introduce decision-making but remain unstructured by avoiding nested blocks, instead relying on flat jumps that do not enforce scoping or indentation hierarchies. These forms underscore the absence of built-in nesting, allowing jumps from any point to any other without validation of logical hierarchy. In compiler implementation, the GOTO statement maps directly to low-level assembly instructions, such as the unconditional JMP opcode in x86 architecture, which alters the processor's program counter to the target address without intermediate checks. 
This translation is efficient, as it leverages hardware-level branching capabilities inherent to von Neumann architectures, but compilers typically perform no static analysis on flow validity, treating labels as opaque identifiers resolved at link time. Consequently, GOTO usage introduces risks like infinite cycles—arising from circular jumps without termination conditions—and unreachable code, where sections become isolated due to overlooked paths, complicating debugging and maintenance. For example, duplicate or misaligned labels can render error-handling blocks inaccessible, as demonstrated in security vulnerabilities where validation logic was bypassed.[20][21]

Multiple Entry and Exit Points

In non-structured programming, subroutines often featured multiple entry points, allowing execution to begin at various locations within the code block rather than solely at the primary entry, which contrasted with the single-entry norm of later structured paradigms. This design enabled programmers to reuse portions of a subroutine by jumping directly into intermediate sections, typically facilitated by language features like the ENTRY statement in FORTRAN, where an alternate entry point could be defined within a subprogram to bypass initial setup code. For instance, a single subprogram might handle related but distinct operations—such as data validation followed by computation—by providing separate entry names for each, with the caller selecting the appropriate starting point based on context. Such flexibility was particularly prevalent in early high-level languages like FORTRAN from the 1960s onward, where the ENTRY statement permitted multiple entry points within a subroutine or function, allowing direct access to specific code segments without reinitializing shared variables.[22][23] Similarly, multiple exit points were common, achieved through scattered RETURN statements or unconditional jumps that allowed the subroutine to terminate at arbitrary locations depending on the execution path taken. This meant a subroutine could conclude early after a partial operation or after completing varying branches, often without a unified cleanup or return mechanism. To manage the diverse paths resulting from these multiple entries and exits, programmers frequently relied on techniques such as flags—boolean indicators set at entry to signal the intended flow—or global variables to track state across invocations, ensuring that skipped initialization code did not lead to undefined behavior. 
These approaches were typical in 1960s-era languages like FORTRAN and COBOL, where subroutines were optimized for performance in resource-constrained environments, allowing efficient reuse of code without full reinstantiation. However, this often fostered subtle bugs, as the shared internal state could become inconsistent if an entry point assumed prior setup that was not performed.[24] The use of multiple entry and exit points significantly increased coupling between modules, as changes to the internal logic of a subroutine could unpredictably affect multiple calling contexts due to the intertwined execution paths. For example, modifying a variable's initialization in one entry point might break assumptions in another, propagating errors across the program without clear boundaries. This tight coupling made code maintenance substantially more challenging, as tracing the impact of alterations required analyzing all possible entry-exit combinations rather than a linear flow. Guidelines for safety-critical software emphasize that multiple entry and exit points elevate code complexity, complicating analysis, testing, and debugging efforts compared to single-entry/single-exit designs.[25] Overall, while these features supported concise implementations in early computing, they contributed to the unreliability often associated with non-structured code by obscuring dependencies and control flow.

Comparison to Structured Programming

Differences in Control Flow

In non-structured programming, control flow is characterized by arbitrary transfers of execution, typically via unconditional or conditional jumps such as GOTO statements, resulting in execution paths that form a general directed graph. This graph includes nodes representing basic blocks of sequential instructions and edges denoting possible control transfers, allowing cycles, merges, and splits at arbitrary points without restrictions on entry or exit points for code segments. Such flow is often visualized and analyzed using flowcharts, where jumps can connect any diagram elements, enabling flexible but intricate path structures.[26][5] In contrast, structured programming restricts control flow to a hierarchical composition of three fundamental constructs: sequences of statements executed in order, selections via conditional branching (e.g., IF-THEN-ELSE), and iterations using loop mechanisms (e.g., WHILE or DO-WHILE). These ensure that every program block maintains a single entry point and a single exit point, promoting a tree-like or nested structure in the control flow graph where paths are predictable and composable. The Böhm–Jacopini theorem (1966) formally proves that any computable algorithm can be expressed using only these structured elements, demonstrating their sufficiency despite the prevalence of non-structured approaches in pre-1960s computing eras. A primary mechanistic difference lies in the handling of loops and repetitions: non-structured programming permits backward jumps (e.g., via GOTO to earlier labels) that can create cycles without predefined bounds or termination conditions inherent to the construct, allowing loops to emerge anywhere in the flow. Structured programming, however, encapsulates such repetitions within dedicated iteration statements that enforce clear scoping and exit criteria, eliminating arbitrary jumps to enhance analyzability and formal verification.

Pros and Cons

Non-structured programming provides notable advantages in performance-critical scenarios, particularly on early hardware with limited resources. Direct jumps, such as those enabled by GOTO statements, minimize execution overhead by avoiding the additional instructions required for structured control constructs like loops or conditionals, allowing for more efficient code on machines with slow processing and memory constraints. This approach was especially beneficial in the 1960s and 1970s, where optimizing for speed often outweighed concerns for readability in resource-scarce environments. Another advantage lies in its flexibility for low-level optimizations, including interrupt handling in systems programming. GOTO enables precise, non-hierarchical control flow adjustments, which can be essential for responding to hardware events without the rigidity of nested blocks, making it suitable for translating complex algorithms or one-off fixes in performance-sensitive applications. Despite these benefits, non-structured programming suffers from significant disadvantages, foremost among them poor readability that frequently results in "spaghetti code"—tangled control paths that obscure the program's logic and hinder comprehension.[3] This unstructured flow leads to high bug rates, as untraceable jumps increase the risk of overlooked errors and infinite loops during execution.[3] Scalability poses a further challenge, with large programs becoming notoriously difficult to maintain or extend due to the lack of modular boundaries, exacerbating the "software crisis" of the era through unpredictable modifications.[3] Edsger W. Dijkstra's 1968 critique highlighted how such practices render programs unmanageable as complexity grows, advocating for disciplined alternatives to mitigate these issues.[3]

Examples and Case Studies

Classic Examples in FORTRAN

In early FORTRAN I and II, developed in the late 1950s, control flow heavily relied on unconditional and computed GOTO statements, which allowed programmers to implement branching logic similar to assembly language jumps but in a higher-level syntax.[27] A classic use of the computed GOTO appeared in menu-driven systems or selection routines, where an integer variable determined the branch target from a list of statement labels. For instance, the syntax GO TO (30, 40, 50, 60), I transfers control to label 30 if I equals 1, to 40 if I equals 2, and so on, enabling flexible but opaque multi-way decisions without explicit conditionals.[27] This construct, introduced in FORTRAN I around 1957, facilitated rapid prototyping of interactive programs on limited hardware like the IBM 704, though it obscured the program's logical structure by scattering execution paths across labeled statements.[28] The arithmetic IF statement further exemplified non-structured branching by providing a three-way conditional jump based on the sign of an arithmetic expression. In FORTRAN I, the form IF (expression) n1, n2, n3 directs control to label n1 if the expression is negative, n2 if zero, and n3 if positive, often used for comparisons in sorting or searching algorithms.[27] A representative example from numerical routines computes the maximum value in an array:
BIGA = A(1)
DO 20 I = 2, N
IF (BIGA - A(I)) 10, 20, 20
10 BIGA = A(I)
20 CONTINUE
Here, the arithmetic IF checks if the current element exceeds the running maximum, jumping to label 10 for an update or directly to 20 otherwise, integrating seamlessly with the DO loop but relying on labels for flow control.[28] While FORTRAN introduced the DO loop as a rudimentary structured construct for indexed iteration—executing from a start to end value with a fixed increment—programmers frequently bypassed its bounds using GOTO statements to achieve greater flexibility in handling exceptions or early exits, leading to tangled control flows.[27] For example, an unstructured loop might simulate iteration via a conditional check and backward jump, such as:
I = 1
100 IF (I .GT. N) GO TO 200
... process A(I) ...
I = I + 1
GO TO 100
200 CONTINUE
This builds the loop from an explicit test and a backward GOTO rather than from a construct with inherent termination bounds; the jump back to 100 mimics repetition but complicates debugging by dispersing entry and exit points across the code, a common pattern in 1950s-1960s scientific computing applications.[28] Such techniques, while efficient for the era's compilers, contributed to "spaghetti code" by intertwining linear execution with arbitrary jumps.[29]

Real-World Instances

One notable failure attributed to errors in non-structured FORTRAN code occurred during the 1962 launch of the Mariner 1 spacecraft, intended as NASA's first interplanetary probe to Venus. The guidance system malfunctioned due to a typographical error in the guidance equations transcribed into FORTRAN code, where an overbar was omitted from a symbol in the velocity computation term, leading to erroneous guidance signals and forcing mission abort four minutes after liftoff at a cost of approximately $18.5 million (1962 USD).[30][31] This incident highlighted how the reliance on statement numbers and implicit control flows in early FORTRAN could amplify transcription mistakes into mission-critical failures.[32] In the 1960s, NASA's Apollo guidance software exemplified heavy use of non-structured programming in production systems. The Apollo Guidance Computer (AGC) software, developed by MIT's Instrumentation Laboratory, was primarily written in assembly language featuring frequent unconditional transfers (TC instructions equivalent to GOTO) and a priority-based interrupt system to manage real-time navigation and control.[33] These jump-heavy structures, while enabling efficient use of the AGC's limited 36K-word fixed memory, contributed to significant debugging challenges, as the nonlinear control flow made tracing execution paths difficult amid concurrent interrupts and resource constraints. Despite these issues, the software successfully supported the Apollo 11 moon landing in 1969, demonstrating that non-structured approaches could achieve high reliability in mission-critical applications when rigorously tested.[34] Early operating systems, such as the Compatible Time-Sharing System (CTSS) developed at MIT in the early 1960s, relied on assembly code with extensive jumps and no structured constructs, serving as precursors to more advanced systems like Multics. 
CTSS's kernel implemented time-sharing through intricate branching logic in assembly, allowing multiple users to interact with an IBM 7094 but complicating maintenance due to the tangled control flows.[35] This jump-heavy design enabled pioneering features like online file editing but foreshadowed scalability issues in larger kernels.[36] Non-structured elements in 1970s banking software foreshadowed the Y2K crisis, where GOTO statements in legacy COBOL and FORTRAN code obscured date-handling logic, making error detection and remediation arduous. In financial systems processing millions of transactions daily, these unstructured flows hid two-digit year assumptions (e.g., 75 for 1975), leading to logic errors that propagated undetected until millennium rollover preparations revealed the need for extensive audits.[37] Banking institutions faced particular challenges, as modifying spaghetti-like code risked introducing new bugs in interdependent modules, prompting costly fixes estimated at billions globally.[38] During the software crisis of the 1960s and 1970s, non-structured programs often exceeded 10,000 lines and became unmaintainable due to GOTO-induced complexity, frequently necessitating complete rewrites to adopt structured paradigms. Edsger Dijkstra's 1968 critique highlighted how such "spaghetti code" in large systems led to unpredictable behaviors and escalating maintenance costs, as exemplified in military and scientific projects where debugging time dwarfed development efforts. This unmaintainability drove the shift toward structured programming, with many organizations, including IBM, rewriting core systems to improve readability and reliability.[39]

Legacy and Modern Perspectives

The Rise of Structured Programming

The rise of structured programming in the 1960s and 1970s marked a pivotal shift away from non-structured practices, driven by theoretical advancements that demonstrated the sufficiency of disciplined control flows for all computable functions. A foundational milestone was the 1966 Böhm-Jacopini theorem, which proved that any flowchart-based algorithm could be constructed using only three basic structures: sequential execution, conditional selection (if-then-else), and iteration (while loops), eliminating the need for unstructured jumps like GOTO statements.[40] This result, published in Communications of the ACM, provided a rigorous mathematical basis for arguing that non-structured elements were unnecessary and potentially detrimental to program clarity and verifiability.[40]

Building on this theoretical groundwork, Edsger W. Dijkstra's 1968 letter "Go To Statement Considered Harmful," published in Communications of the ACM, ignited widespread debate by critiquing the GOTO statement as a source of "unstructured programming" that obscured logical flow and complicated error detection.[6] Dijkstra argued that reliance on arbitrary transfers of control fostered programs difficult to reason about, advocating instead for hierarchical structures that enhanced readability and maintainability.[6] His piece, which prompted follow-up letters and discussions in subsequent issues of the journal, galvanized the software community toward disciplined alternatives.[6]

Practical validation of these ideas emerged in the 1970s through experiments led by Harlan D. Mills at IBM's Federal Systems Division, where structured programming was applied to large-scale projects to address the growing "software crisis."[41] A notable case was the 1970 New York Times Indexing System, a 5,000-line PL/I program developed in four months by a team of four under Mills' direction using top-down design and strict avoidance of GOTO, resulting in zero defects during integration testing and significant productivity gains. Mills' approach, detailed in his writings on program synthesis and verification, demonstrated that structured methods scaled to industrial applications.[41]

This momentum influenced programming language evolution, with ALGOL 60 exemplifying early adoption by confining GOTO statements to well-defined blocks and labels, thereby limiting their disruptive potential while supporting modular design.[42] Similarly, the drive for FORTRAN reform during the decade culminated in the 1977 ANSI standard (FORTRAN 77), which retained core unstructured features like computed and assigned GOTO but initiated scrutiny of them as obsolescent, setting the stage for their deletion in subsequent revisions like Fortran 95.[43]

A broader cultural transformation occurred at the 1969 NATO Conference on Software Engineering Techniques in Rome, where discussions, including Dijkstra's presentation on structured programming, stressed modularity, top-down decomposition, and elimination of jumps in favor of predictable control hierarchies to combat escalating software complexity.[44] The conference proceedings underscored these principles as essential for reliable large-system development, influencing standards bodies and industry adoption worldwide.[44]

Remaining Uses Today

Non-structured programming, characterized by unrestricted jumps like GOTO statements, persists in niche areas of contemporary software development, primarily within legacy systems maintenance and certain low-level optimizations. In the financial sector, vast COBOL-based banking applications, which handle trillions in daily transactions, frequently retain GOTO constructs from decades-old codebases, complicating ongoing maintenance and modernization efforts.[45][46] These systems, often exceeding millions of lines of code, require specialized tools to analyze and refactor unstructured control flows during updates, as seen in migration projects for Italian banking software where GOTO elimination was a key step to enable Java porting.[47] In embedded and systems programming, non-local jumps remain viable for error handling and resource cleanup in performance-critical scenarios. For instance, C's setjmp and longjmp functions enable exception-like behavior without C++ overhead, allowing efficient unwinding of stateful operations such as device initialization in constrained environments.[48] Similarly, controlled use of GOTO facilitates centralized error exits in complex functions, reducing nesting and improving readability, as evidenced by its ~100,000 instances in the Linux kernel for managing failures in subsystems like NFS.[49] Embedded C coding standards, such as those from Barr Group, permit occasional GOTO for exceptional cases if it clarifies code, though they emphasize avoidance to prevent spaghetti-like structures. However, non-structured elements are increasingly viewed as anti-patterns in modern code reviews and standards. 
Guidelines like MISRA C advise against GOTO (Rule 15.1, which recommends that the goto statement not be used) to enhance safety in critical systems, flagging it as a construct that obscures flow and invites bugs; deviations require justification.[50] Linters and static analyzers in tools like those enforcing MISRA routinely detect and discourage such constructs in new development, rendering them rare outside legacy contexts.[51] In the 2020s, discussions in retrocomputing communities often revisit GOTO in historical BASIC interpreters, while AI-generated code may sporadically produce unstructured outputs mimicking old patterns, though these are typically refactored for maintainability.[52] Overall, the supplanting influence of structured programming has confined non-structured practices to these diminishing roles.

