Programming tool
A programming tool or software development tool is a computer program that is used to develop another computer program, usually by helping the developer manage computer files. For example, a programmer may use a tool called a source code editor to edit source code files, and then a compiler to convert the source code into machine code files. They may also use build tools that automatically package executable program and data files into shareable packages or install kits.
A set of tools that are run one after another, with each tool feeding its output to the next one, is called a toolchain. An integrated development environment (IDE) integrates the function of several tools into a single program. Usually, an IDE provides a source code editor as well as other built-in or plug-in tools that help with compiling, debugging, and testing.
Whether a program is considered a development tool can be subjective. Some programs, such as the GNU Compiler Collection, are used exclusively for software development, while others, such as Notepad, are not meant specifically for development but are nevertheless often used for programming.
Categories
Notable categories of development tools:
- Assembler – Converts assembly language into machine code
- Bug tracking system – Software application that records software bugs
- Build automation – Building software in an unattended fashion
- Code review software – Software that supports the review of a program's code by one or more people
- Compiler – Software that translates code from one programming language to another
- Compiler-compiler – Program that generates parsers or compilers, a.k.a. parser generator
- Debugger – Computer program used to test and debug other programs
- Decompiler – Program translating executable to source code
- Disassembler – Computer program to translate machine language into assembly language
- Documentation generator – Automation technology for creating software documentation
- Graphical user interface builder – Tool for visually designing graphical user interfaces
- Linker – Program that combines intermediate build files into an executable file
- Memory debugger – Software memory problem finder
- Minifier – Removal of unnecessary characters in code without changing its functionality
- Pretty-printer – Formatting to make code or markup easier to read
- Performance profiler – Measuring the time or resources used by a section of a computer program
- Static code analyzer – Analysis of computer programs without executing them
- Source code editor – Text editor specializing in software code
- Source code generation – Automatic generation of source code from a model, template, or higher-level description
- Version control system – Stores and tracks versions of files
See also
- Call graph – Structure in computing
- Comparison of integrated development environments – Notable software packages that are nominally IDEs
- Computer aided software engineering – Domain of software tools
- Git – Distributed version control software system
- GitHub – Software development collaboration platform
- Lint – Tool to flag poor computer code
- List of software engineering topics – Overview of and topical guide to software engineering
- List of unit testing frameworks
- Manual memory management – Computer memory management methodology
- Memory leak – Failure of a program to release memory it no longer needs
- Reverse-engineering – Process of extracting design information from anything artificial
- Revision Control System – Version-control system
- Software development kit – Set of software development tools
- Software engineering – Engineering approach to software development
- SourceForge – Software discovery and hosting platform for B2B and open source software
- SWIG – Open-source tool that connects C/C++ code with higher-level programming languages
- Toolkits for User Innovation – Design method
- Valgrind – Programming tool for profiling, memory debugging and memory leak detection
External links
Media related to Programming tools at Wikimedia Commons
Programming tool
Definition and Overview
Core Definition
A programming tool is a computer program designed to assist developers in creating, debugging, maintaining, or executing other software.[5][6] These tools span the software development lifecycle, providing essential support for tasks such as code writing, error detection, performance optimization, and deployment.[7] In embedded systems and hardware-oriented development, programming tools may include physical devices like in-circuit debuggers and programmers that interface directly with target hardware.[8]
Key characteristics of programming tools include their ability to automate repetitive tasks, such as code compilation or syntax checking, thereby enhancing programmer efficiency and reducing manual effort.[7] They also promote code quality by enforcing standards, identifying potential issues early, and facilitating collaboration among development teams.[6] By abstracting complex operations through features like integrated libraries or debugging interfaces, these tools enable developers to focus on problem-solving rather than low-level implementation details.[7]
Basic examples of programming tools include text editors tailored for coding, such as Vim or Notepad++, which allow developers to write and modify source code in plain text format.[5] Unlike end-user applications like word processors, which apply formatting and visual enhancements for document creation, coding text editors prioritize syntax highlighting, auto-completion, and plain-text preservation to ensure compatibility with compilers and interpreters.[5]
Programming tools often form toolchains, which are sequences of interconnected tools that handle end-to-end development processes, running from code editing through compilation to deployment.[9] For instance, a typical toolchain might integrate a text editor, compiler, and linker to transform source code into executable binaries, streamlining workflows and minimizing errors across stages.[9]
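As a minimal sketch of how such a chain fits together, the following Python script drives a compile, link, and run sequence through subprocess calls; the file name hello.c and the three-stage layout are assumptions made for illustration, and gcc is assumed to be installed and on the PATH.

```python
# Toy toolchain driver: each stage consumes the previous stage's output.
# Assumptions (not from the article): a C source file named hello.c exists
# in the current directory and gcc is installed and on the PATH.
import subprocess

def run_stage(stage: str, cmd: list[str]) -> None:
    """Run one toolchain stage and stop the pipeline on the first failure."""
    print(f"[{stage}] {' '.join(cmd)}")
    subprocess.run(cmd, check=True)

# Compile the edited source file into an object file.
run_stage("compile", ["gcc", "-c", "hello.c", "-o", "hello.o"])
# Link the object file into an executable binary.
run_stage("link", ["gcc", "hello.o", "-o", "hello"])
# Execute the resulting program, the final link in this toy chain.
run_stage("run", ["./hello"])
```

In practice, build tools described later in this article automate exactly this kind of sequencing so that stages are re-run only when their inputs change.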
Scope and Distinctions
The scope of programming tools encompasses software applications and, to a lesser extent, specialized hardware designed to facilitate the software development lifecycle, targeting the needs of programmers in tasks such as code creation, analysis, testing, and maintenance. These tools include features like syntax highlighters, which color-code code elements to improve readability. According to the Guide to the Software Engineering Body of Knowledge (SWEBOK), such tools are integral to development environments, supporting processes that reduce manual effort and improve efficiency across engineering activities.[10]
Programming tools exclude general-purpose software that is not tailored for coding workflows, such as standard word processors, which prioritize formatted text layout over structured code handling unless explicitly adapted with plugins or extensions for syntax support and error checking. Similarly, everyday hardware like conventional keyboards falls outside this scope, as it lacks customization for development tasks; however, specialized hardware, such as in-circuit emulators, qualifies for domain-specific use in embedded development. This distinction emphasizes tools' focus on domain-specific enhancements rather than ubiquitous utilities.[10]
A key distinction exists between programming tools and libraries or frameworks: the former enable and automate development processes, acting as facilitators for creation and management, while the latter provide reusable code components or structural scaffolds that integrate into the programmer's output. SWEBOK clarifies that tools operate as standalone or integrated utilities for tasks like compilation or debugging, whereas libraries are invoked by the programmer's code and frameworks invert control by dictating application architecture. This separation ensures tools address workflow efficiency without embedding as runtime elements.[10]
Programming tools are fundamentally classified by functionality to delineate their roles in the development pipeline, such as editing tools for code composition versus analysis tools for error detection and optimization. This high-level categorization, as outlined in SWEBOK, groups tools into categories like development environments for authoring, testing suites for validation, and configuration management systems for versioning, allowing programmers to select based on lifecycle phase needs.[10]
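The library/framework distinction can be made concrete with a small, purely illustrative Python sketch; every name below is hypothetical and the example is not drawn from SWEBOK. The point is only that a library is invoked by the programmer's code, whereas a framework inverts control and invokes the programmer's code.

```python
# Hypothetical illustration of the distinction drawn above; none of these
# names come from SWEBOK or any real product.
import json  # a library: the programmer's code decides when to call it

def load_config(text: str) -> dict:
    # Library style: our code stays in control and invokes the component.
    return json.loads(text)

class MiniFramework:
    """Toy framework: it owns the control flow and calls user code back."""
    def __init__(self) -> None:
        self._handlers = []

    def register(self, handler) -> None:
        self._handlers.append(handler)

    def run(self, events) -> None:
        # Inversion of control: the framework decides when handlers run.
        for event in events:
            for handler in self._handlers:
                handler(event)

print(load_config('{"debug": true}'))       # the code calls the library
app = MiniFramework()
app.register(lambda event: print("handled:", event))
app.run(["start", "stop"])                  # the framework calls the code
```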
History
Early Development (1940s-1970s)
The early development of programming tools in the 1940s and 1950s was dominated by the limitations of nascent electronic computers, which relied heavily on punch-card systems for program input and, in some cases, execution. Originating from Herman Hollerith's tabulating machines in the late 19th century, punch cards became a common medium for data input and output, as seen on machines like the UNIVAC I (1951), where programmers could prepare decks of cards that were fed into card readers for batch processing.[11] The ENIAC (1945), however, was programmed primarily through manual wiring of plugboards and setting switches for instructions, with punch cards used mainly for data I/O. These systems represented the first rudimentary programming tools, as they facilitated the organization and input of machine code but required manual wiring or direct binary entry, making programming labor-intensive and error-prone. Early assemblers emerged as a key precursor to abstraction, with David Wheeler creating the world's first assembler in 1949 for the EDSAC computer at the University of Cambridge; this tool translated mnemonic symbols into machine instructions, reducing the cognitive load on programmers compared to pure binary coding.[12]
A pivotal innovation during this period was Grace Hopper's pioneering work on compilers, which automated the translation of higher-level instructions into executable code. Between 1951 and 1952, Hopper and her team at Remington Rand developed the A-0 system—the first compiler—for the UNIVAC I, functioning initially as a rudimentary linker and loader that processed subroutines from a library into machine code.[13] This effort built on Hopper's earlier contributions to subroutine libraries and marked compilers as essential automation tools, shifting programming from direct hardware manipulation toward symbolic representation. By the mid-1950s, such advancements were complemented by the growing use of magnetic tapes for storage, though punch cards remained prevalent until the late 1960s.
The 1960s brought widespread adoption of high-level language compilers, enabling more efficient and readable code for diverse applications. IBM's FORTRAN, released in 1957 for the IBM 704 mainframe, was the first commercially successful compiler for a high-level language, optimizing scientific computations by translating mathematical formulas into efficient machine code and achieving near hand-optimized performance.[14] Following closely, COBOL emerged in 1959 through the efforts of the CODASYL Short-Range Committee, convened by the U.S. Department of Defense to standardize business-oriented programming; its English-like syntax aimed to bridge the gap between programmers and domain experts in data processing.[15] Concurrently, basic debugging facilities were introduced in advanced operating systems, such as IBM's OS/360 launched in 1964, which included console-based tracing, memory inspection, and dump utilities to aid in identifying runtime errors in batch and multiprogramming environments.[16] These tools represented a foundational step in systematic error detection, though they were limited to post-execution analysis without interactive breakpoints.
In the 1970s, programming tools advanced toward interactivity and modularity, driven by time-sharing systems and emerging networks. Text-based editors gained prominence, with TECO (Text Editor and COrrector), originally developed in 1962 by Dan Murphy at MIT, evolving into a programmable editor widely used on DEC PDP systems for manipulating source code through macro commands. Precursors to modern editors like EMACS appeared mid-decade, including the 1976 TMACS macro package for TECO on PDP-10 machines, which standardized editing functions and introduced extensible scripting for customized workflows.[17] Linkers also matured to support modular code assembly, as seen in the ld utility integrated into early Unix versions from Bell Labs in 1971, which resolved symbols across separately compiled object files to produce linked executables, promoting reusable code components in multi-file projects. The ARPANET, operational since 1969, began influencing collaborative tools by providing protocols for remote file transfer (FTP, 1971) and email, allowing distributed teams to share source code and debug collaboratively across institutions.
Throughout this era, programming tools were constrained by the absence of graphical user interfaces, relying instead on line-oriented terminals, punch cards, and tape drives, which enforced sequential batch processing and limited real-time interaction. These limitations underscored the need for more integrated environments, setting the stage for later innovations while highlighting the ingenuity of early developers in overcoming hardware constraints.
Modern Evolution (1980s-Present)
The advent of personal computers in the 1980s transformed programming tools from command-line utilities into more integrated and user-friendly systems, enabling broader accessibility for developers. A pivotal example was Turbo Pascal, released by Borland International in November 1983, which introduced an integrated development environment (IDE) that combined code editing, compilation, and basic debugging within a single interface, significantly speeding up the development cycle for Pascal programmers on IBM PC compatibles.[18] Concurrently, environments like Smalltalk-80, released in 1980 by Xerox PARC, advanced graphical debugging capabilities, allowing developers to inspect and modify running programs visually through object-oriented interfaces, laying groundwork for modern interactive tools.[19]
The 1990s and 2000s marked a shift toward collaborative and scalable tools, driven by the expansion of networked computing and open-source movements. Version control systems evolved from basic file tracking to distributed models; the Concurrent Versions System (CVS), designed and coded by Brian Berliner in April 1989, enabled multiple developers to manage shared code repositories over networks, becoming a standard for team-based projects.[20] This progressed with Git, created by Linus Torvalds in April 2005 to support Linux kernel development, offering decentralized branching and merging that revolutionized collaborative workflows. Build automation tools like Make, originally developed in 1976, achieved widespread adoption during this era alongside Unix and open-source ecosystems, automating compilation processes for increasingly complex software builds.
In the 2010s, the rise of cloud computing and mobile platforms further integrated programming tools into web-based and cross-platform paradigms, reducing setup barriers and enhancing portability. Cloud IDEs emerged as key innovations, with Cloud9 founded in 2010 to provide browser-based environments for collaborative coding without local installations, later integrated into AWS services.[21] Tools for mobile and cross-platform development proliferated, exemplified by React Native's launch in 2015, which allowed single-codebase apps for iOS and Android, and Flutter's introduction in 2017, streamlining UI development across devices with Google's backing.[22]
The 2020s have seen programming tools become deeply embedded in DevOps pipelines, emphasizing automation and continuous integration to support agile, scalable software delivery. Integration with platforms like Jenkins for CI/CD workflows has become standard, enabling seamless transitions from code commit to deployment in cloud-native environments.[23] Post-COVID adaptations have accelerated remote-friendly features, such as enhanced collaboration in tools like GitHub Codespaces, allowing distributed teams to code, review, and debug in real time without physical proximity, as evidenced by studies on software engineering practices during enforced work-from-home periods.[24]
Types of Programming Tools
Editors and Integrated Development Environments (IDEs)
Text editors form the foundational layer of programming tools for code authoring, providing a lightweight interface to create, view, and modify source code files across various formats. Early text editors, such as the line-based ed shipped with Version 7 Unix in 1979, offered basic commands for inserting, deleting, and navigating text without graphical elements. Modern text editors have evolved to include advanced features like syntax highlighting, which applies color and styling to code elements based on programming language rules to enhance readability and reduce errors during review. Auto-completion, another key capability, uses context-aware suggestions to predict and insert code snippets, variables, or functions as developers type, thereby streamlining the writing process. Vim, first released on November 2, 1991, by Bram Moolenaar as an improved version of the vi editor, exemplifies a highly efficient, modal text editor that supports syntax highlighting, auto-completion via plugins, and extensive customization through scripting. Notepad++, launched on November 24, 2003, by Don Ho, is a free, open-source editor for Windows that emphasizes plugin extensibility and supports syntax highlighting for over 80 programming languages, making it popular for quick scripting and configuration tasks.
Integrated Development Environments (IDEs) build upon text editors by combining code editing with additional functionality such as build automation, version-control integration, and graphical debugging interfaces within a unified workspace, which minimizes the need to switch between disparate tools. This all-in-one approach facilitates faster iteration cycles and better project organization, particularly for complex applications. Visual Studio, Microsoft's flagship IDE first announced on January 28, 1997, integrates an advanced editor with compilers for languages like C# and C++, along with tools for UI design and deployment, supporting enterprise-scale development. Eclipse, developed by IBM and released as open source in November 2001, and later stewarded by the Eclipse Foundation, offers a plugin-based architecture that allows customization for multiple languages, with strong emphasis on Java projects through its extensible platform. Language-specific IDEs further tailor these environments to platform ecosystems; for instance, Xcode, introduced by Apple on June 23, 2003, provides an editor optimized for Swift and Objective-C, incorporating simulators for iOS and macOS testing to enable rapid prototyping of Apple-native applications. The primary advantages of IDEs include real-time error checking via integrated linters that flag syntax issues and potential bugs as code is entered, and robust project management features that automate dependency resolution and configuration across large codebases, ultimately boosting developer productivity through reduced manual overhead.
The evolution of editors and IDEs traces from rudimentary line editors of the mid-20th century to contemporary AI-augmented systems that incorporate machine learning for intelligent assistance. Recent innovations, such as the integration of GitHub Copilot—launched by GitHub on June 29, 2021, as an AI-powered code suggestion tool—embed natural language processing into editors like Visual Studio Code to generate entire functions or debug suggestions based on contextual prompts, marking a shift toward collaborative human-AI development workflows.
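As a rough illustration of what a syntax highlighter does, the following Python sketch colors a handful of keywords with ANSI escape codes; real editors rely on full lexers and language grammars, so this regex-based approach is only a toy and the keyword list is deliberately incomplete.

```python
# Toy syntax highlighter: wrap a few Python keywords in an ANSI colour code.
# Real editors use complete lexers; the keyword list here is deliberately tiny.
import re

KEYWORDS = {"def", "return", "if", "else", "for", "while", "import"}
PATTERN = re.compile(r"\b(" + "|".join(sorted(KEYWORDS)) + r")\b")

def highlight(line: str) -> str:
    """Wrap recognised keywords in a bold-blue ANSI escape sequence."""
    return PATTERN.sub("\033[1;34m\\1\033[0m", line)

source = "def greet(name):\n    if name:\n        return 'hi ' + name"
for line in source.splitlines():
    print(highlight(line))
```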
Compilers, Interpreters, and Assemblers
Compilers are programming tools that translate source code written in high-level languages into machine code or intermediate representations suitable for execution on a target platform. The compilation process typically involves several phases: lexical analysis, where the source code is scanned to identify tokens such as keywords and identifiers; syntax analysis or parsing, which checks the structure against the language's grammar to build a parse tree; semantic analysis to verify meaning and type compatibility; intermediate code generation to produce a platform-independent form; optimization to improve efficiency; and final code generation to output executable code.[25] For example, the GNU Compiler Collection (GCC), first released in 1987 by Richard Stallman as part of the GNU Project, exemplifies a widely used compiler that supports multiple languages including C and C++ through these phases.[26] Just-in-time (JIT) compilers represent a variant that performs compilation during program execution, dynamically translating bytecode or intermediate code into machine code for hot paths to balance startup time and runtime performance, as seen in Java Virtual Machine implementations.[27]
Interpreters, in contrast, execute source code directly without producing a standalone executable, processing it line by line or statement by statement during runtime. The Python interpreter, specifically CPython, compiles Python scripts into bytecode and then interprets that bytecode sequentially, enabling immediate feedback but introducing overhead from repeated translation.[28] This approach trades execution speed—often slower than compiled code due to per-instruction interpretation—for enhanced portability, as the same source code can run across platforms with a compatible interpreter installed, without needing recompilation for each architecture.[29]
Assemblers serve as low-level translators that convert assembly language instructions—mnemonic representations of machine code—into binary machine code executable by the processor. Tools like the Netwide Assembler (NASM) process assembly source files to generate object files, handling directives for sections, symbols, and relocations specific to architectures such as x86-64.[30]
Key optimization techniques in compilers include dead code elimination, which removes unreachable or unused code segments to reduce program size and improve execution efficiency without altering observable behavior.[31] Cross-compilation extends compiler utility by allowing code generation for multiple target platforms from a single host machine, facilitating development for embedded systems or diverse hardware without direct access to each environment.
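The lexical-analysis phase described above can be sketched in a few lines of Python; the token categories below are invented for illustration and bear no relation to any real compiler's lexer.

```python
# Sketch of a lexer (the lexical-analysis phase): scan source text into
# (kind, text) tokens. The token categories are invented for illustration.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source: str):
    """Yield (kind, text) pairs, discarding whitespace."""
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()

print(list(tokenize("total = price * 3 + tax")))
```

The resulting token stream is what the parsing phase would consume to build a parse tree; later phases operate on progressively lower-level representations until machine code is emitted.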
Debuggers and Profilers
Debuggers are essential programming tools that enable developers to identify and resolve errors in code by allowing interactive inspection and control of program execution. These tools facilitate setting breakpoints to pause execution at specific points, stepping through code line by line, and examining variables and memory states in real time. For instance, the GNU Debugger (GDB), developed as part of the GNU Project in 1986, supports these features across multiple languages and platforms, including native, remote, and simulated environments.[32][33]
A key distinction in debuggers is between source-level and machine-level types. Source-level debuggers operate at the level of the original source code, enabling breakpoints on statements, conditional pausing, and stepping through source lines while displaying variable values in a human-readable format.[34] In contrast, machine-level debuggers work with assembly instructions or binary code, which is useful for low-level analysis but requires knowledge of hardware-specific details.[35] An example of a source-level debugger is pdb, Python's built-in interactive debugger, which allows setting conditional breakpoints, single-stepping into functions, and evaluating expressions to inspect variables during execution.[36]
Remote debugging extends these capabilities to distributed systems, where a local debugger connects over a network to control and inspect code running on remote machines, such as servers or embedded devices. This technique is particularly valuable for diagnosing issues in production environments without halting the system. Common debugging techniques include call stack tracing, which visualizes the sequence of function calls leading to the current execution point, helping trace the path of errors or exceptions.[37] Heap analysis complements this by examining dynamic memory allocation, identifying issues like corruption or invalid accesses through tracing allocations and deallocations.[38]
Profilers, on the other hand, focus on analyzing program performance to pinpoint bottlenecks in runtime execution, memory usage, and resource consumption rather than fixing logical errors. These tools instrument code to collect metrics on execution time, function calls, and memory allocations without significantly altering behavior. Valgrind, an open-source suite for Linux and similar systems, exemplifies this through its heap profilers like Massif, which measure heap memory usage over time, including allocations and leaks, to reveal patterns of excessive consumption or inefficiencies.[39][40] Profilers often integrate call-graph generation to attribute runtime costs to specific functions or code paths, aiding in the identification of hotspots. For memory-specific profiling, tools like Valgrind's Memcheck detect leaks by tracking un-freed allocations and invalid reads/writes, providing detailed reports on their origins via stack traces.[38] Such analysis helps optimize code by focusing on high-impact areas, such as functions with disproportionate CPU or memory demands, thereby improving overall efficiency in software development.
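As a concrete, if minimal, profiling example, the following Python snippet uses the standard-library cProfile module to attribute cumulative run time to functions, in the spirit of the call-graph reporting described above; cProfile is a different tool from Valgrind, which targets natively compiled programs, and the workload functions are invented for the sketch.

```python
# Minimal deterministic-profiling sketch with Python's standard library.
# cProfile stands in for the general profiler role described above; it is
# not Valgrind, which works on natively compiled programs.
import cProfile
import pstats

def slow_sum(n: int) -> int:
    return sum(i * i for i in range(n))

def workload() -> None:
    for _ in range(50):
        slow_sum(10_000)

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Report the functions that account for the most cumulative time (hotspots).
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)

# For interactive source-level debugging rather than profiling, placing
# breakpoint() on a suspect line drops execution into pdb, where commands
# such as next, step, and p <expression> inspect program state.
```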
Build Automation and Linkers
Build automation encompasses tools and scripts that streamline the process of transforming source code into executable software by automating tasks such as dependency resolution, compilation, linking, and packaging.[41] These tools ensure consistent, repeatable builds while minimizing manual intervention, often supporting incremental builds that only recompile changed files to optimize efficiency.[42] A seminal example is Make, developed by Stuart Feldman at Bell Labs in April 1976, which introduced dependency tracking via Makefiles to specify build rules and prerequisites, revolutionizing software construction by automating recompilation based on file timestamps.[42] Feldman received the 2003 ACM Software System Award for this contribution, highlighting its enduring impact on build practices.[42]
For language-specific ecosystems, tools like Gradle extend these concepts with domain-specific languages for more expressive configurations. Released in 2008, Gradle was designed primarily for Java projects, using a Groovy-based DSL to handle complex dependency management and multi-project builds, addressing limitations in earlier tools like Ant by enabling declarative and imperative scripting styles.[43] It supports incremental compilation and caching to accelerate builds in large-scale applications.[43]
Linkers form a critical component of the build process, operating after compilation to combine multiple object files—produced by compilers from source code—into a single executable or library by resolving symbolic references and relocating addresses.[44] In static linking, the linker embeds all required library code directly into the final executable at build time, resulting in a self-contained binary that avoids runtime dependencies but increases file size.[45] Conversely, dynamic linking defers resolution to runtime, where the operating system loader maps shared libraries into memory, promoting modularity, reduced redundancy, and easier updates but introducing potential issues like version mismatches.[45] The GNU linker (ld), part of the GNU Binutils project, exemplifies a widely used implementation, supporting both linking modes through command-line options and linker scripts for custom memory layouts and symbol handling.
To further automate builds within development workflows, continuous integration (CI) pipelines integrate build tools into server-based systems that trigger automated processes upon code commits. Jenkins, originally launched as Hudson in 2004 by Kohsuke Kawaguchi at Sun Microsystems, provides an open-source platform for orchestrating these pipelines via plugins, enabling scheduled or event-driven builds, artifact management, and integration with version control.[46] This setup ensures early detection of integration errors in collaborative environments.[46]
Despite these advancements, build automation faces significant challenges, particularly dependency hell, where conflicting version requirements among libraries lead to resolution failures and brittle builds.[47] Cross-platform builds add complexity, as variations in operating systems, architectures, and toolchains necessitate conditional configurations to maintain compatibility, often requiring additional abstraction layers or specialized tools to avoid platform-specific pitfalls.[48]
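The timestamp-based dependency check that Make popularized, described at the start of this section, can be sketched as follows in Python; the file names are hypothetical and the snippet is a toy illustration of the idea, not a substitute for a real build tool.

```python
# Toy version of Make's timestamp check: rebuild the target only when it is
# missing or older than a prerequisite. File names are hypothetical and this
# is an illustration of the idea, not a real build tool.
import os
import subprocess

def out_of_date(target: str, prerequisites: list[str]) -> bool:
    """Return True if the target is missing or older than any prerequisite."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_mtime for p in prerequisites)

TARGET = "app"
SOURCES = ["main.c", "util.c"]

if out_of_date(TARGET, SOURCES):
    subprocess.run(["gcc", *SOURCES, "-o", TARGET], check=True)
else:
    print(f"{TARGET} is up to date")
```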
Version Control Systems
Version control systems (VCS) are essential programming tools that track changes to source code over time, allowing developers to revert modifications, collaborate effectively, and maintain project integrity. By recording every edit in a structured manner, VCS enable teams to manage code evolution, experiment with new features without disrupting the main codebase, and recover from errors efficiently. These systems form the backbone of modern software development, supporting both individual and collaborative workflows by providing a complete history of changes.
Version control systems are broadly categorized into centralized and distributed models. In centralized VCS, such as Apache Subversion (SVN), released in 2000, all file versions and revision history reside on a single central server, requiring developers to connect to it for commits, updates, and access; this setup facilitates straightforward administration and access control but creates a single point of failure if the server is unavailable.[49] In contrast, distributed VCS like Git, created by Linus Torvalds in April 2005, allow every developer to maintain a full copy of the repository, including its entire history, enabling offline work, faster operations, and decentralized collaboration without reliance on a central authority. This distributed approach has become dominant due to its resilience and support for parallel development.[50]
Core features of VCS include branching, which creates independent lines of development for features or fixes; merging, which integrates changes from branches back into the main codebase; commit histories, which log snapshots of the project at specific points with metadata like author and message; and conflict resolution, where tools help reconcile overlapping modifications by highlighting differences and allowing manual intervention.[50] These capabilities ensure traceability and reduce errors in team environments.[51]
VCS rely on diff algorithms to compute and represent changes between file versions efficiently. A seminal example is the Myers diff algorithm, introduced in 1986, which finds the shortest sequence of edits (insertions and deletions) needed to transform one text into another in O(ND) time, using an approach closely related to longest common subsequence computation, making it suitable for the large files common in software projects.[52] Platforms like GitHub, a web-based hosting service for Git repositories launched in 2008, build on these capabilities by providing remote storage, visualization of diffs, and collaboration interfaces, allowing users to push local changes to shared repositories for team review.[53]
Best practices for using VCS emphasize structured workflows to maximize benefits. Tagging releases involves marking specific commits with lightweight, annotated, or signed tags to identify stable versions, for example git tag -a v1.0.0, which creates an annotated tag, enabling easy reference and deployment without altering the commit history. Pull requests, a mechanism popularized by GitHub, facilitate code review by proposing changes from a branch to the main repository, incorporating discussions, automated tests, and approvals before merging to prevent integration issues. Adopting these practices promotes clean histories and enhances team productivity.
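As a small illustration of diffing two versions of a file, the following Python snippet uses the standard-library difflib module; note that difflib applies its own matching heuristic rather than the Myers algorithm discussed above, and the file contents and labels are invented for the example.

```python
# Sketch of diffing two versions of a file with Python's difflib. difflib
# uses its own matching heuristic (not the Myers algorithm cited above), but
# the unified-diff output mirrors what tools like git display. File labels
# are invented for the example.
import difflib

old = ["def greet(name):\n",
       "    print('Hello', name)\n"]
new = ["def greet(name, punctuation='!'):\n",
       "    print('Hello', name + punctuation)\n"]

for line in difflib.unified_diff(old, new, fromfile="greet.py@v1", tofile="greet.py@v2"):
    print(line, end="")
```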
