Dynamic programming language
A dynamic programming language is a programming language in which many operations, such as determining data types, resolving method calls, and binding variables, are carried out at runtime rather than during a separate compilation phase. In static languages, by contrast, the structure and types of a program are fixed at compile time. This runtime flexibility lets developers write more adaptable and concise code.
For instance, in a dynamic language a variable can start out holding an integer and later be reassigned to hold a string, without any explicit type declaration. Dynamic typing of this kind makes coding more fluid and less restrictive, letting developers focus on logic and functionality rather than on the constraints of the language.
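This rebinding behavior can be sketched in Python, where a name is simply rebound and the type travels with the value:

```python
# A variable is just a name bound to a value; the type belongs to the value.
x = 42
print(type(x).__name__)   # -> int

x = "forty-two"           # rebinding to a string requires no declaration
print(type(x).__name__)   # -> str
```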
Implementation
Eval
Some dynamic languages offer an eval function. This function takes a string or abstract syntax tree containing code in the language and executes it. If this code stands for an expression, the resulting value is returned. Erik Meijer and Peter Drayton distinguish the runtime code generation offered by eval from the dynamic loading offered by shared libraries and warn that in many cases eval is used merely to implement higher-order functions (by passing functions as strings) or deserialization.[1]
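As a minimal Python sketch of such an eval function, including the pass-a-function-as-a-string pattern that Meijer and Drayton caution against:

```python
# eval() parses, compiles, and executes a string as an expression at runtime.
expr = "1 + 2 * 3"
result = eval(expr)
print(result)              # -> 7

# Passing a function as a string, a use of eval that is usually better
# served by ordinary higher-order functions:
square = eval("lambda x: x * x")
print(square(5))           # -> 25
```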
Object runtime alteration
A type or object system can typically be modified during runtime in a dynamic language. This can mean generating new objects from a runtime definition or based on mixins of existing types or objects. This can also refer to changing the inheritance or type tree, and thus altering the way that existing types behave (especially with respect to the invocation of methods).
Type inference
Since many dynamic languages come with a dynamic type system, inferring the types of values at runtime for internal interpretation is a common task. Because the type of a value may change over the course of execution, type inference is typically performed whenever an atomic operation is carried out.
Variable memory allocation
Static programming languages require, possibly indirectly, that the size of utilized memory be defined before compilation (unless the developer works around this with pointer logic). Consistent with object runtime alteration, dynamic languages implicitly need to allocate and reallocate memory at runtime based on the individual operations of the program.
Reflection
Reflection is common in many dynamic languages, and typically involves analysis of the types and metadata of generic or polymorphic data. It can, however, also include full evaluation and modification of a program's code as data, such as the features that Lisp provides in analyzing S-expressions.
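In Python, for example, built-in functions such as type(), hasattr(), vars(), and getattr() provide this kind of runtime analysis (illustrative sketch):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)

# Inspect the type and attribute metadata of a live object.
print(type(p).__name__)    # -> Point
print(hasattr(p, "x"))     # -> True
print(sorted(vars(p)))     # -> ['x', 'y']

# getattr resolves an attribute whose name is computed at runtime.
print(getattr(p, "y"))     # -> 2
```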
Macros
A limited number of dynamic programming languages provide features which combine code introspection (the ability to examine classes, functions, and keywords to know what they are, what they do and what they know) and eval in a feature called macros. Most programmers today who are aware of the term macro have encountered them in C or C++, where they are a static feature built into a small subset of the language and capable only of string substitutions on the text of the program. In dynamic languages, however, macros provide access to the inner workings of the compiler and full access to the interpreter, virtual machine, or runtime, allowing the definition of language-like constructs which can optimize code or modify the syntax or grammar of the language.
Assembly, C, C++, early Java, and Fortran do not generally fit into this category.[clarification needed]
The earliest dynamic programming language is considered to be Lisp (McCarthy, 1965), which has continued to influence the design of programming languages to the present day.[2]
Example code
The following examples show dynamic features using the language Common Lisp and its Common Lisp Object System (CLOS).
Computation of code at runtime and late binding
The example shows how a function can be modified at runtime from computed source code.
; the source code is stored as data in a variable
CL-USER > (defparameter *best-guess-formula* '(lambda (x) (* x x 2.5)))
*BEST-GUESS-FORMULA*
; a function is created from the code and compiled at runtime, the function is available under the name best-guess
CL-USER > (compile 'best-guess *best-guess-formula*)
#<Function 15 40600152F4>
; the function can be called
CL-USER > (best-guess 10.3)
265.225
; the source code might be improved at runtime
CL-USER > (setf *best-guess-formula* `(lambda (x) ,(list 'sqrt (third *best-guess-formula*))))
(LAMBDA (X) (SQRT (* X X 2.5)))
; a new version of the function is being compiled
CL-USER > (compile 'best-guess *best-guess-formula*)
#<Function 16 406000085C>
; the next call will call the new function, a feature of late binding
CL-USER > (best-guess 10.3)
16.28573
Object runtime alteration
This example shows how an existing instance is changed to include a new slot when its class changes, and how an existing method can be replaced with a new version.
; a person class. The person has a name.
CL-USER > (defclass person () ((name :initarg :name)))
#<STANDARD-CLASS PERSON 4020081FB3>
; a custom printing method for the objects of class person
CL-USER > (defmethod print-object ((p person) stream)
(print-unreadable-object (p stream :type t)
(format stream "~a" (slot-value p 'name))))
#<STANDARD-METHOD PRINT-OBJECT NIL (PERSON T) 4020066E5B>
; one example person instance
CL-USER > (setf *person-1* (make-instance 'person :name "Eva Luator"))
#<PERSON Eva Luator>
; the class person gets a second slot. It then has the slots name and age.
CL-USER > (defclass person () ((name :initarg :name) (age :initarg :age :initform :unknown)))
#<STANDARD-CLASS PERSON 4220333E23>
; updating the method to print the object
CL-USER > (defmethod print-object ((p person) stream)
(print-unreadable-object (p stream :type t)
(format stream "~a age: ~a" (slot-value p 'name) (slot-value p 'age))))
#<STANDARD-METHOD PRINT-OBJECT NIL (PERSON T) 402022ADE3>
; the existing object has now changed, it has an additional slot and a new print method
CL-USER > *person-1*
#<PERSON Eva Luator age: UNKNOWN>
; we can set the new age slot of instance
CL-USER > (setf (slot-value *person-1* 'age) 25)
25
; the object has been updated
CL-USER > *person-1*
#<PERSON Eva Luator age: 25>
Assembling of code at runtime based on the class of instances
In the next example, the class person gets a new superclass. The print method is redefined such that several methods are assembled into an effective method. The effective method is assembled based on the class of the argument and the methods that are available and applicable at runtime.
; the class person
CL-USER > (defclass person () ((name :initarg :name)))
#<STANDARD-CLASS PERSON 4220333E23>
; a person just prints its name
CL-USER > (defmethod print-object ((p person) stream)
(print-unreadable-object (p stream :type t)
(format stream "~a" (slot-value p 'name))))
#<STANDARD-METHOD PRINT-OBJECT NIL (PERSON T) 40200605AB>
; a person instance
CL-USER > (defparameter *person-1* (make-instance 'person :name "Eva Luator"))
*PERSON-1*
; displaying a person instance
CL-USER > *person-1*
#<PERSON Eva Luator>
; now redefining the print method to be extensible
; the around method creates the context for the print method and it calls the next method
CL-USER > (defmethod print-object :around ((p person) stream)
(print-unreadable-object (p stream :type t)
(call-next-method)))
#<STANDARD-METHOD PRINT-OBJECT (:AROUND) (PERSON T) 4020263743>
; the primary method prints the name
CL-USER > (defmethod print-object ((p person) stream)
(format stream "~a" (slot-value p 'name)))
#<STANDARD-METHOD PRINT-OBJECT NIL (PERSON T) 40202646BB>
; a new class id-mixin provides an id
CL-USER > (defclass id-mixin () ((id :initarg :id)))
#<STANDARD-CLASS ID-MIXIN 422034A7AB>
; the print method just prints the value of the id slot
CL-USER > (defmethod print-object :after ((object id-mixin) stream)
(format stream " ID: ~a" (slot-value object 'id)))
#<STANDARD-METHOD PRINT-OBJECT (:AFTER) (ID-MIXIN T) 4020278E33>
; now we redefine the class person to include the mixin id-mixin
CL-USER > (defclass person (id-mixin) ((name :initarg :name)))
#<STANDARD-CLASS PERSON 4220333E23>
; the existing instance *person-1* now has a new slot and we set it to 42
CL-USER > (setf (slot-value *person-1* 'id) 42)
42
; displaying the object again. The print-object function now has an effective method, which calls three methods: an around method, the primary method and the after method.
CL-USER > *person-1*
#<PERSON Eva Luator ID: 42>
Examples
Popular dynamic programming languages include JavaScript, Python, Ruby, PHP, Lua, and Perl. The following are generally considered dynamic languages:
- ActionScript
- BeanShell[3]
- C# (using reflection)
- Clojure
- CobolScript
- ColdFusion Markup Language
- Common Lisp and most other Lisps
- Dylan
- E
- Elixir
- Erlang
- Forth
- Gambas
- GDScript
- Groovy[4]
- Java (using Reflection)
- JavaScript
- Julia
- Lua
- MATLAB / Octave
- Objective-C
- ooRexx
- Perl
- PHP
- PowerShell
- Prolog
- Python
- R
- Raku
- Rebol
- Ring
- Ruby
- Smalltalk
- SuperCollider
- Tcl
- VBScript
- Wolfram Language
References
- ^ Meijer, Erik; Drayton, Peter (2005). Static Typing Where Possible, Dynamic Typing When Needed: The End of the Cold War Between Programming Languages (PDF). Microsoft Corporation. CiteSeerX 10.1.1.69.5966.
- ^ Harper, Robert (2016). Practical Foundations for Programming Languages. New York: Cambridge University Press. p. 195. ISBN 9781107150300.
- ^ "Chapter 24. Dynamic language support". static.springsource.org. Retrieved 2013-07-17.
- ^ "Groovy - Home". Archived from the original on 2014-03-02. Retrieved 2014-03-02.
Further reading
- Tratt, Laurence (2009). Dynamically Typed Languages. Advances in Computers. Vol. 77. pp. 149–184. doi:10.1016/s0065-2458(09)01205-4. ISBN 9780123748126.
External links
(Many use the term "scripting languages".)
- Prechelt, Lutz (August 18, 2002). "Are Scripting Languages Any Good? A Validation of Perl, Python, Rexx, and Tcl against C, C++, and Java" (PDF). Advances in Computers. 57: 205–270. doi:10.1016/S0065-2458(03)57005-X. ISBN 9780120121571. ISSN 0065-2458. Retrieved 2020-07-27.
- Bezroukov, Nikolai (2013). "A Slightly Skeptical View on Scripting Languages". Softpanorama (2.1 ed.). Retrieved 2020-07-27.
- Wall, Larry (December 6, 2007). Programming is Hard, Let's Go Scripting... (Speech). State of the Onion 11. Perl.com. Retrieved 2020-07-27.
- Roth, Gregor (November 20, 2007). "Scripting on the Java platform". JavaWorld. Retrieved 2020-07-27.
- Ousterhout, John K. (March 1998). "Scripting: Higher-Level Programming for the 21st Century" (PDF). Computer. Vol. 31, no. 3. pp. 23–30. doi:10.1109/2.660187. ISSN 0018-9162. Archived from the original (PDF) on 2020-07-27. Retrieved 2020-07-27.
- "ActiveState Announces Focus on Dynamic Languages". ActiveState. July 26, 2004. Retrieved 2020-07-27.
- Ascher, David (July 27, 2004). "Dynamic Languages — ready for the next challenges, by design" (PDF). Whitepapers. ActiveState. Archived from the original (PDF) on 2008-11-18.
- Ascher, David (July 27, 2004). "Dynamic Languages — ready for the next challenges, by design". Whitepapers. ActiveState. Archived from the original on 2008-12-08.
Overview
Definition and Core Concepts
A dynamic programming language is a class of high-level programming languages in which significant aspects of a program's behavior, such as type determination, variable binding, and structural properties, are resolved during execution rather than at compile time. The term should not be confused with "dynamic programming", an unrelated algorithmic method for solving optimization problems by breaking them into subproblems. Dynamic languages contrast with static languages, where these decisions are fixed prior to runtime; deferring them enables greater flexibility in code structure and data handling but can introduce runtime errors if assumptions fail.[8]
Core concepts in dynamic programming languages revolve around runtime decision-making to support adaptability. Runtime type checking, for instance, verifies data types only when operations are performed, allowing variables to hold values of varying types throughout execution without prior declaration. Duck typing further embodies this by emphasizing behavioral compatibility over explicit type declarations: an object is treated as suitable for a context if it supports the required methods or attributes at runtime, regardless of its nominal type. Just-in-time (JIT) compilation complements this flexibility by optimizing code during execution, translating high-level instructions into machine code on the fly to balance interpretive ease with performance. These mechanisms collectively enable late binding, where method resolutions occur at runtime, and reflection, permitting programs to inspect and modify their own structure dynamically.[8][9]
The term's historical origins trace to the late 1950s and the development of Lisp by John McCarthy, which introduced dynamic evaluation of symbolic expressions and pioneered features like garbage collection for managing runtime memory. Lisp's design, detailed in McCarthy's 1960 paper, established a foundation for languages in which code could be treated as data and executed interpretively, distinguishing it from contemporaneous static languages like Fortran. Over subsequent decades, the approach evolved through influences like Smalltalk in the 1970s, which popularized object-oriented dynamic typing, to modern scripting languages, solidifying the distinction from static compilation models.[9][8]
Dynamic programming languages often rely on interpreted execution models, in which source code is read and executed by an interpreter, facilitating immediate feedback and incremental development without separate compilation steps. This approach, inherited from early systems like Lisp, supports rapid prototyping and is particularly suited to domains requiring extensibility, such as scripting and web development. While some implementations incorporate hybrid elements like JIT compilation for efficiency, the interpretive core underscores the emphasis on runtime adaptability over upfront rigidity.[8]
Distinction from Static Programming Languages
Dynamic programming languages differ fundamentally from static ones in the timing of type resolution and binding. In static programming languages such as C++ and Java, types and bindings are resolved at compile time, enabling early detection of type-related errors and facilitating compiler optimizations that enhance runtime performance.[4] This approach provides greater safety through compile-time checks but imposes rigidity, limiting flexibility for rapid changes or for handling semi-structured and unstructured data; roughly 90% of enterprise-generated data is unstructured (as of 2023).[10] Conversely, dynamic languages like Python and JavaScript defer these resolutions to runtime, allowing greater adaptability and easier prototyping by avoiding upfront type declarations.[4]
The trade-offs between these approaches are evident in development productivity and reliability. Static typing catches many errors early, reducing debugging time in large codebases and supporting better code maintainability, though it may reject valid programs due to overly strict checks.[11] Dynamic typing, however, enables faster initial development; an empirical study comparing Java (static) and Groovy (dynamic) found that developers using the dynamic language completed most programming tasks significantly faster, though the advantage diminished for larger, more complex tasks.[12] This flexibility suits heterogeneous data handling and quick iteration but risks runtime errors that static systems prevent upfront, potentially increasing overall maintenance costs.[4]
Languages exist on a spectrum between purely static and purely dynamic approaches, with hybrids offering selective dynamism. Purely dynamic languages resolve all types at runtime without compile-time enforcement, prioritizing expressiveness.[4] Hybrid approaches, such as C#'s dynamic keyword introduced in version 4.0 (2010), allow developers to opt into runtime type resolution for specific scenarios like interop with dynamic objects, blending static safety with targeted flexibility while incurring runtime binding costs only where needed.[13]
Performance implications further highlight these distinctions, as dynamic languages incur overhead from runtime type checks and binding, often leading to slower execution compared to static counterparts.[4] This overhead arises from mechanisms like virtual calls and dynamic dispatch, but modern optimizations such as tracing just-in-time (JIT) compilation mitigate it; for instance, tracing JITs in languages like Racket via Pycket achieve near-native speeds for functional workloads by specializing traces at runtime.[14]
Key Features
Dynamic Typing and Type Inference
In dynamic typing, type checking for variables and expressions occurs at runtime rather than compile time, allowing the language interpreter or virtual machine to determine and enforce types based on the actual values assigned during execution.[15] This approach permits variables to hold values of different types over the course of a program's execution without explicit declarations, fostering flexibility in code structure. For instance, in Python, the built-in type() function enables runtime inspection of an object's type, returning a type object that reflects its current classification, such as int for integers or str for strings.[16] Implicit type coercion may also occur in some dynamic languages to facilitate operations between incompatible types, such as converting a string to a number during arithmetic, though this can lead to unexpected behaviors if not managed carefully.[17]
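A short Python sketch of runtime type checking; note that Python itself refuses the implicit string-to-number coercion that a language like JavaScript would perform:

```python
def describe(value):
    # type() reports the value's current runtime classification.
    return type(value).__name__

print(describe(3))         # -> int
print(describe("3"))       # -> str

# The check happens only when the operation runs, not at compile time:
try:
    "3" + 3
except TypeError:
    print("TypeError")     # Python rejects the implicit coercion
```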
Type inference in dynamic languages extends this runtime paradigm by employing algorithms to deduce types without requiring programmer annotations, thereby optimizing performance while preserving the language's core dynamism. Algorithms like Hindley-Milner, traditionally associated with static typing, have been adapted for gradual typing systems in dynamic contexts, where type variables are inferred and instantiated dynamically during reduction to support safe polymorphism without full static analysis.[18] These techniques, often implemented in tools for dynamic languages, analyze code flows to approximate types, enabling optimizations like just-in-time compilation that avoid exhaustive runtime checks.[19]
The primary benefits of dynamic typing and inference lie in accelerating development cycles, as programmers avoid verbose type declarations and can write polymorphic functions that operate seamlessly across multiple types, enhancing code reusability and expressiveness.[15] This is particularly advantageous for prototyping and exploratory programming, where rapid iteration outweighs the need for upfront type specifications. However, challenges arise from potential runtime errors due to type mismatches, which may only surface during execution after significant computation, complicating debugging in large codebases.[4] To mitigate these issues, modern dynamic languages like Python introduce optional type hints via the typing module (introduced in Python 3.5 per PEP 484), which provide annotations for static analysis tools without imposing runtime enforcement, thus blending inference benefits with improved error detection.[20]
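The optional-hint approach can be sketched in Python; the annotations are stored as metadata for external checkers and are not enforced at runtime:

```python
from typing import Union

def double(x: Union[int, float]) -> Union[int, float]:
    return x * 2

# The interpreter ignores the hints: a string argument still "works".
print(double("ab"))                    # -> abab
# The annotations remain available as metadata for static analysis tools.
print(sorted(double.__annotations__))  # -> ['return', 'x']
```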
Late Binding and Runtime Polymorphism
In dynamic programming languages, late binding, also known as dynamic binding, refers to the runtime resolution of symbols such as variables, functions, and methods, rather than determining these associations at compile time. This contrasts with early binding in static languages, where name resolutions occur during compilation using fixed type information. In dynamic languages, lookups typically employ runtime data structures like hash tables or dictionaries to map names to their corresponding values or implementations, allowing for greater flexibility but potentially incurring performance overhead due to repeated searches.[21][22]
Runtime polymorphism in these languages emerges from late binding, enabling objects to be treated interchangeably based on their behavior rather than predefined type hierarchies. A key mechanism is duck typing, where an object's compatibility with a method or interface is verified solely by the presence and behavior of required attributes at runtime, encapsulated in the principle that "if it walks like a duck and quacks like a duck, then it is a duck." This dynamic resolution supports ad-hoc polymorphism without explicit type declarations or inheritance, as method selection depends on the actual object's state during execution. Duck typing has been observed to be prevalent in languages like Smalltalk, where it facilitates cross-hierarchy method reuse.[23]
Exemplary mechanisms illustrate this process. In Smalltalk, a pioneering dynamic language from the 1970s, method dispatch involves runtime traversal of the object's class hierarchy, starting from its immediate class and ascending through superclasses until the invoked method is located, embodying pure late binding for polymorphic calls.
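Duck typing can be sketched in Python with two hypothetical classes; the function accepts any object that responds to quack():

```python
class Duck:
    def quack(self):
        return "quack"

class Robot:
    def quack(self):
        return "beep"

def make_it_quack(obj):
    # No type test: suitability is decided by behavior at call time.
    return obj.quack()

print(make_it_quack(Duck()))   # -> quack
print(make_it_quack(Robot()))  # -> beep
```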
Similarly, JavaScript employs a prototype chain for property and method resolution: when accessing an attribute, the runtime first checks the object's own properties, then delegates to its prototype and subsequent prototypes in the chain until a match is found or the chain ends, enabling dynamic extension and polymorphism without static class definitions.[24]
The extensibility implications of late binding are profound, as it underpins architectures where behavior can be augmented at runtime. Plugin systems, for instance, leverage dynamic method registration and dispatch to integrate new modules seamlessly, allowing applications to load and invoke extensions based on runtime availability without recompilation. This also aids in constructing domain-specific languages (DSLs), where late binding permits tailored syntax and semantics to be resolved dynamically, enhancing adaptability in specialized domains like scripting or configuration. Dynamic typing serves as a prerequisite, providing the runtime type flexibility essential for such binding mechanisms to operate effectively.[25][26]
Reflection and Introspection
Reflection in dynamic programming languages enables programs to inspect and potentially modify their own structure and behavior at runtime, providing access to metadata such as class hierarchies, methods, and attributes. This capability is foundational to reflective programming, allowing developers to reason about and adapt code dynamically without compile-time knowledge. For instance, in Ruby, methods such as methods, instance_variables, and class provide reflective information about objects, including their methods, instance variables, and class hierarchies, supporting programmatic manipulation of properties and method invocation by name.[27] Similarly, behavioral reflection extends this to altering execution flows, as explored in foundational work on reflective architectures.[28]
Introspection represents a read-only subset of reflection, focused on querying program entities without modification. It facilitates examination of live objects, such as retrieving function signatures or class attributes, which is essential for tools that analyze code structure. In Python, the inspect module exemplifies this, providing functions to retrieve information about modules, classes, methods, and tracebacks, including source code inspection and parameter details for callable objects.[29] This distinction ensures that introspection remains lightweight and safe for diagnostic purposes, contrasting with fuller reflective operations that may alter state.
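For example, inspect can query a function's signature at runtime (illustrative sketch):

```python
import inspect

def greet(name, greeting="hello"):
    return f"{greeting}, {name}"

# Read-only queries: signature, parameter names, object classification.
sig = inspect.signature(greet)
print(sig)                        # -> (name, greeting='hello')
print(list(sig.parameters))       # -> ['name', 'greeting']
print(inspect.isfunction(greet))  # -> True
```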
Practical use cases of reflection and introspection span serialization, debugging, and framework development. In serialization, reflection inspects object graphs to convert them to byte streams or JSON, as seen in Ruby's Marshal process that uses reflection to identify serializable instance variables.[30] For debugging, Python's introspection tools enable runtime analysis of stack frames and variable types, aiding in error diagnosis. In framework development, Ruby on Rails leverages reflection through Active Record's reflection methods to examine associations and aggregations, supporting dynamic ORM behaviors like inferring relationships from model metadata.[31] These applications highlight how reflection empowers adaptive software systems, such as runtime object manipulation for extending object capabilities without subclassing.
However, reflection introduces security considerations, as it can expose internal program details and enable unauthorized modifications. In dynamic languages, unchecked reflective access may lead to information leaks or injection vulnerabilities by allowing attackers to inspect sensitive metadata or invoke privileged methods. For example, in Ruby applications, reflection in deserialization processes like Marshal has been exploited to achieve remote code execution by reconstructing malicious objects, underscoring the need for safeguards like input validation.[32] Developers must mitigate these risks through encapsulation, such as limiting reflective APIs in untrusted code or using security managers to restrict operations.
Macros and Metaprogramming
Macros in dynamic programming languages enable the generation and transformation of code at compile time or runtime, allowing developers to extend the language's syntax and semantics in powerful ways. Pioneered in Lisp, macros treat code as data, facilitating the creation of domain-specific abstractions that integrate seamlessly with the host language. In Lisp, macros were introduced as early as 1958, leveraging the language's homoiconic nature where programs are represented as lists, enabling straightforward manipulation and expansion of symbolic expressions.[33] This feature, formalized in John McCarthy's seminal 1960 paper on recursive functions of symbolic expressions, allowed for the definition of new syntactic forms through functions like defmacro, which expand into equivalent code during evaluation.[34]
Macros can be classified as unhygienic or hygienic based on their handling of identifier binding. Unhygienic macros, prevalent in early Lisp implementations, expand code by direct substitution, which risks unintended variable capture where macro-introduced identifiers conflict with those in the surrounding context, potentially altering program semantics.[33] Hygienic macros address this by automatically renaming identifiers to preserve lexical scoping, ensuring expansions do not interfere with user-defined variables unless explicitly intended. This approach was advanced in Scheme through the 1986 work of Kohlbecker et al., which proposed an expansion algorithm that systematically avoids capture while maintaining flexibility for deliberate bindings.[35]
Metaprogramming extends macro capabilities to broader runtime code manipulation, enabling dynamic adaptation of program behavior without predefined structures. In Ruby, the method_missing hook exemplifies this by intercepting calls to undefined methods, allowing objects to generate or delegate responses on the fly, such as interpreting method names as parameters for dynamic operations.[36] Similarly, JavaScript's Proxy object facilitates metaprogramming by intercepting fundamental operations like property access and assignment, enabling custom traps that implement validation, logging, or transformation logic.[37]
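Python offers an analogous hook, __getattr__, which is invoked only when normal attribute lookup fails; the DynamicFinder class below is a hypothetical sketch of the method_missing-style pattern:

```python
class DynamicFinder:
    """Synthesizes find_by_<field>(value) methods on demand."""

    def __init__(self, records):
        self.records = records

    def __getattr__(self, name):
        # Reached only for "missing" attributes, like Ruby's method_missing.
        if name.startswith("find_by_"):
            field = name[len("find_by_"):]
            return lambda value: [r for r in self.records if r.get(field) == value]
        raise AttributeError(name)

finder = DynamicFinder([{"name": "Ada"}, {"name": "Alan"}])
print(finder.find_by_name("Ada"))   # -> [{'name': 'Ada'}]
```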
These techniques offer significant advantages, including the reduction of boilerplate code through automated generation of repetitive structures and the creation of expressive APIs tailored to specific domains. For instance, macros in Lisp have enabled concise domain-specific syntax for tasks like symbolic computation, minimizing verbosity while enhancing readability.[38] Metaprogramming features like Proxies further promote flexible, interceptable interfaces that abstract common patterns, such as data validation or event handling, without modifying underlying objects.[37]
However, macros and metaprogramming introduce limitations, particularly in code opacity and debugging challenges. The dynamic nature of expansions can obscure control flow, making it difficult to trace errors or predict behavior, as generated code may not align intuitively with source-level expectations.[38] Overuse often leads to "magic" that complicates maintenance, especially in large systems where unintended interactions arise from runtime manipulations. Reflection serves as a foundational tool for such metaprogramming by providing access to program structure, but it alone does not mitigate these debugging hurdles.[38]
Implementation Mechanisms
Eval and Runtime Code Execution
In dynamic programming languages, the eval function serves as a fundamental mechanism for interpreting and executing code represented as strings or symbolic expressions at runtime, enabling flexible scripting and interactive computation. Originating in the design of Lisp, where John McCarthy introduced eval in 1960 as part of the language's recursive evaluation of symbolic expressions, this capability allows programs to treat code as data, facilitating dynamic behavior central to the paradigm.[39] In modern dynamic languages like Python, eval evaluates a string containing a Python expression in the current execution environment, returning the result of that computation, as implemented in the language's built-in functions since its initial release in 1991.[40][41]
This runtime execution supports scenarios such as generating and running ad-hoc calculations or user-defined formulas, exemplified by Python's eval("2 + 3 * 4"), which parses the string, compiles it to bytecode, and evaluates it to yield 14 within the provided global and local namespaces.[40] However, eval poses significant security risks, as it can execute arbitrary code, potentially allowing code injection attacks if untrusted input—such as from user sources—is passed directly, leading to vulnerabilities like unauthorized file access or system compromise.[40] To mitigate these, implementations often restrict the execution context by supplying limited globals and locals dictionaries, excluding dangerous built-ins like __import__ or open.[40]
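A sketch of this namespace restriction in Python; note that such restrictions reduce but do not eliminate the risk, so eval should still never see untrusted input:

```python
import math

# An empty __builtins__ plus an explicit whitelist of allowed names.
safe_globals = {"__builtins__": {}, "sqrt": math.sqrt}

print(eval("sqrt(16) + 1", safe_globals))   # -> 5.0

# Dangerous built-ins such as open are no longer reachable by name:
try:
    eval("open('/etc/passwd')", safe_globals)
except NameError:
    print("blocked")
```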
Alternatives to eval address its limitations for broader code execution; for instance, Python's exec function handles statements and code blocks (e.g., loops or assignments) that return no value, in contrast to eval's focus on expressions, making exec suitable for multi-line scripts while both share the same parsing overhead.[42] These mechanisms are integral to read-eval-print loop (REPL) environments, where eval processes user input iteratively to provide immediate feedback, as seen in interactive shells for languages like Lisp and Python.[43]
Despite their utility in interactive development and prototyping, eval and similar functions incur performance penalties due to repeated parsing and compilation of strings into executable code at runtime, often orders of magnitude slower than pre-compiled code, though optimizations like caching compiled objects can partially alleviate this in repeated evaluations.[44] This overhead underscores their role as essential yet cautious tools for enabling runtime scripting in dynamic languages, relying on late binding for symbol resolution during execution.[45]
Runtime Object and Type Manipulation
In dynamic programming languages, runtime object alteration allows developers to add or remove methods and attributes from existing objects during execution, enabling flexible modifications without recompilation. For instance, Python's built-in setattr() function assigns a value to an object attribute by name, effectively adding it if it does not exist, while delattr() removes a specified attribute from an object.[46][47] This capability supports adaptive behavior in applications where object structures evolve based on runtime conditions, such as in scripting environments or plugin systems.
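A minimal Python sketch of these two built-ins (the Plugin class and version attribute are illustrative):

```python
# Adding and removing attributes on a live object by name
class Plugin:
    pass

p = Plugin()
setattr(p, "version", 2)      # adds the attribute, since it did not exist
print(p.version)              # → 2

delattr(p, "version")         # removes it again
print(hasattr(p, "version"))  # → False
```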
Type manipulation extends this flexibility by permitting the creation of new classes or types dynamically at runtime, often leveraging prototype-based or class-based inheritance models. In JavaScript, the Object.create() method constructs a new object with a specified prototype object and optional properties descriptor, facilitating the dynamic assembly of class-like structures without predefined blueprints. Such mechanisms underpin metaprogramming techniques, where types can be generated or altered to accommodate varying data models, enhancing expressiveness in web development and interactive applications.
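A sketch of the same idea in Python uses the three-argument form of the built-in type() to assemble a class at runtime rather than from a predefined blueprint (the Greeter name and greet method are illustrative):

```python
# Build a new class dynamically: type(name, bases, namespace)
Greeter = type("Greeter", (object,), {
    "greet": lambda self, who: f"hello, {who}"
})

g = Greeter()
print(g.greet("world"))   # → hello, world
print(type(g).__name__)   # → Greeter
```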
At the implementation level, dynamic languages typically employ hash tables to manage object slots for attributes and methods, allowing sparse and extensible storage that contrasts with the fixed virtual method tables (vtables) used in static languages for efficient dispatch. In Python, each object maintains a __dict__ attribute as a dictionary—a hash table implementation—for storing arbitrary key-value pairs representing attributes, which supports unbounded growth and deletion without fixed offsets.[48] Static languages like C++ rely on vtables, arrays of function pointers associated with class types, to enable polymorphic method calls through offset-based lookups, prioritizing compile-time optimization over runtime extensibility.[49] This hash-based approach in dynamic systems incurs a performance overhead for lookups due to hashing and collision resolution but provides the versatility essential for object and type dynamism.[50]
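The dictionary-backed slot storage can be observed directly in Python (the Point class is illustrative):

```python
# Each instance carries a dict of its attributes, so slots can be
# added and deleted without fixed offsets
class Point:
    pass

p = Point()
p.x, p.y = 1, 2
print(p.__dict__)  # → {'x': 1, 'y': 2}

del p.x            # the hash table simply shrinks; no vtable layout to preserve
print(p.__dict__)  # → {'y': 2}
```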
These mechanisms find practical application in frameworks through techniques like monkey patching, where core classes are extended at runtime to integrate new functionality seamlessly. In Ruby, monkey patching involves reopening existing classes to add or override methods, allowing runtime extensions such as customizing standard library behavior for domain-specific needs, though it requires careful scoping to avoid global side effects.[51] Reflection serves as the primary API enabling such manipulations by exposing object metadata for programmatic inspection and alteration.
Dynamic Memory Allocation
In dynamic programming languages, variables are bound to objects allocated on the heap at runtime, enabling name resolution and storage without predefined stack frames tied to static types. This approach supports flexible variable lifetimes and types determined during execution, as all data structures reside in a managed heap space. For instance, Python maintains a private heap for all objects and containers, where allocation occurs via an internal manager that handles requests from the interpreter. Similarly, in JavaScript, the V8 engine allocates most values, including objects and many primitives, on the heap to facilitate dynamic behavior.[52][53]

Garbage collection provides automatic reclamation of memory from unreachable objects in these languages, preventing manual deallocation errors. Python primarily employs reference counting, where each object tracks its reference count and is deallocated when it drops to zero, supplemented by a generational collector that detects and breaks reference cycles. JavaScript's V8 engine uses a generational tracing collector, dividing the heap into young and old generations: a copying scavenger evacuates the young generation, while mark-sweep and mark-compact phases (developed under the Orinoco project) reclaim the old. These mechanisms ensure memory is freed without explicit programmer intervention, though they require periodic pauses for scanning.[52][54]

Reference counting and tracing garbage collection represent key strategies with distinct trade-offs in dynamic languages. Reference counting offers immediate deallocation and low-latency updates but fails to reclaim cyclic references without additional cycle detection, potentially leading to memory leaks in complex graphs.
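The cyclic-reference gap can be illustrated in Python, whose gc module supplements reference counting with a cycle detector (the Node class is illustrative):

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

# Build a reference cycle that reference counting alone cannot reclaim
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b                  # names are gone, but the cycle keeps counts above zero

collected = gc.collect()  # the cycle detector finds and frees the objects
print(collected >= 2)     # → True (at least the two Node instances)
```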
Tracing collectors, conversely, robustly handle cycles by marking reachable objects from roots and sweeping the unmarked, but they impose throughput costs from traversal and may cause unpredictable pauses, though optimizations like generational schemes mitigate this. In practice, hybrid approaches, as in Python, combine reference counting's responsiveness with tracing's completeness for balanced performance.[55][56]

Dynamic typing in these languages influences allocation patterns by treating all variables as references to heap objects, amplifying the need for efficient garbage collection. Overall, garbage collection enhances portability by abstracting hardware-specific memory details, allowing code to run consistently across architectures without low-level adjustments, albeit at the expense of runtime overhead from indirection and collection cycles.[52][57]

Runtime Code Generation
Runtime code generation in dynamic programming languages involves the creation and assembly of executable code structures, such as abstract syntax trees (ASTs) or bytecode, during program execution to enable flexibility and optimization. This technique allows languages to construct code dynamically based on runtime conditions, contrasting with static compilation by deferring code assembly until necessary. For instance, in Lua, chunks—units of code stored as strings or files—are loaded and precompiled into bytecode instructions at runtime, facilitating the execution of dynamically generated scripts without prior compilation.[58]

In the .NET framework, the DynamicMethod class enables the emission of intermediate language (IL) code at runtime, which is then compiled to native code for immediate use and later reclaimed by the garbage collector. This approach supports the creation of lightweight, on-the-fly methods tailored to specific runtime needs, such as adapting to varying data types or user inputs.[59]

Just-in-time (JIT) compilation represents a key form of runtime code generation, where interpreters trace execution paths and compile frequently used code segments into optimized machine code. PyPy, an implementation of Python, introduced a tracing JIT compiler in 2009 to address performance limitations in dynamic languages by automatically generating specialized code for hot execution loops.[60] This mechanism traces the meta-level of bytecode interpreters to produce efficient native code, significantly boosting speed for compute-intensive tasks.[61] Type-based code assembly further refines runtime generation by producing specialized code variants for different object classes encountered during execution.
In the Self programming language, polymorphic inline caches (PICs), developed in the early 1990s, extend basic inline caching to store multiple receiver types per call site, dynamically generating and patching code to inline method lookups for common type patterns.[62] This reduces dispatch overhead in dynamically typed object-oriented systems by assembling optimized code paths based on observed type distributions.

In performance-critical scenarios, runtime code generation often employs tracing of hot paths—frequently executed code sequences—to identify and compile optimization opportunities, as seen in PyPy's approach where tracers follow loop iterations to generate streamlined executables. Such techniques are essential for dynamic languages to achieve near-static performance levels without sacrificing flexibility, particularly in applications involving unpredictable workloads.[61]

Practical Examples
Code Computation with Late Binding
In dynamic programming languages, code computation with late binding enables the resolution of variables, methods, and operations at runtime, allowing expressions to adapt based on the current execution context without compile-time type checks.[40][63] This mechanism supports flexible computation, as seen in the use of functions like Python's eval, which parses and executes strings as code while resolving names from the provided or current namespaces at evaluation time.[40]
A representative example in Python demonstrates recursive computation of a factorial using eval with late-bound variables. The following function defines the base case and recursively evaluates a string expression that incorporates the current value of n and calls itself:
def factorial(n):
    if n <= 1:
        return 1
    else:
        return eval(f"{n} * factorial({n-1})")

# Example usage
result = factorial(5)
print(result)  # Output: 120
Here the current values of n are interpolated into the string before evaluation, while eval resolves the name factorial at runtime from the enclosing namespace, enabling the computation to proceed dynamically without explicit type annotations.[40] For factorial(5), the evaluation unfolds as 5 * factorial(4), then 5 * (4 * factorial(3)), and so on, until the base case, yielding 120 through successive runtime bindings.
In JavaScript, late binding facilitates runtime polymorphism through operators and functions that handle mixed types without predefined signatures. Consider a simple addition function relying on the + operator:
function add(a, b) {
    return a + b;
}

// Example usage
console.log(add(3, 4));         // Output: 7 (numeric addition)
console.log(add("hello", " ")); // Output: "hello " (string concatenation)
console.log(add(5, "world"));   // Output: "5world" (mixed-type coercion)
The + operator binds its behavior at call time: it performs arithmetic addition if both operands coerce to numbers, or string concatenation otherwise, demonstrating how the same code adapts polymorphically based on runtime types.[64]
Late binding in these examples allows generic algorithms to operate without type specifications, as the runtime environment resolves operations and references dynamically, promoting code reuse across diverse data.[65] In contrast, a statically typed equivalent in a language like Java would require method overloading—separate definitions such as int add(int a, int b) and String add(String a, String b)—to achieve similar flexibility, incurring more boilerplate and compile-time rigidity.[63] This highlights the adaptability dynamic languages gain for computation-heavy tasks.
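A Python sketch of the same overload-free polymorphism; note that, unlike JavaScript, Python raises TypeError for incompatible operand mixes rather than coercing them:

```python
# One untyped function; + late-binds per operand type at call time
def add(a, b):
    return a + b

print(add(3, 4))          # → 7        (numeric addition)
print(add("foo", "bar"))  # → foobar   (string concatenation)
print(add([1], [2]))      # → [1, 2]   (list concatenation)
```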
Dynamic Object Modification
Dynamic object modification in dynamic programming languages allows runtime alterations to object behavior, enhancing extensibility without recompilation. In Ruby, this is achieved by adding methods to existing classes using define_method, which defines instance methods dynamically on modules or classes.[66]
Consider an example in Ruby where a method is added to the built-in String class after instantiation of objects. Initially, a string object lacks the custom method:
str = "hello"
# str.custom_reverse_upper raises NoMethodError
Using define_method, a new method custom_reverse_upper is then defined on the String class:
class String
  define_method(:custom_reverse_upper) do
    reverse.upcase
  end
end
str.custom_reverse_upper # Returns "OLLEH"
The definition takes effect at once: all instances, including those created beforehand, immediately respond to the new method due to Ruby's open class system (per-object variants can be attached with instance_eval).[66]
In Python, monkey patching illustrates similar runtime changes by reassigning methods on classes. Note that CPython forbids setting attributes on built-in types such as list, so the example below patches a user-defined class instead. Before patching, instances use the original method:

class Stack:
    def __init__(self, items=None):
        self.items = list(items or [])

    def push(self, item):
        self.items.append(item)

s = Stack([1, 2])
s.push(3)
print(s.items)  # Outputs: [1, 2, 3]

Reassigning the method on the class changes the behavior of every instance, including existing ones:

def custom_push(self, item):
    self.items.insert(0, item * 2)  # Prepends doubled item instead

Stack.push = custom_push

s.push(3)
print(s.items)  # Outputs: [6, 1, 2, 3]
In testing, tools such as unittest.mock.patch or pytest's monkeypatch.setattr temporarily patch attributes and methods to simulate behaviors or avoid side effects, ensuring tests remain focused and reversible.[67][68]
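A minimal sketch of this testing pattern with the standard library's unittest.mock (the Service class is hypothetical):

```python
from unittest import mock

class Service:
    def fetch(self):
        return "live data"   # imagine a slow network call here

svc = Service()

# The patch is scoped to the with-block and automatically reverted
with mock.patch.object(Service, "fetch", return_value="stub data"):
    print(svc.fetch())       # → stub data

print(svc.fetch())           # → live data (original behavior restored)
```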
Class-Based Runtime Code Assembly
In dynamic programming languages, class-based runtime code assembly involves inspecting an object's class or type at runtime to generate and integrate specialized code paths, such as methods or functions, tailored to that class's structure. This technique leverages introspection to avoid monolithic implementations, enabling more efficient and context-specific behavior.[37] A representative example in JavaScript uses the Proxy object in conjunction with Reflect to dynamically assemble method behavior for instances of a class. Consider a base Calculator class with basic properties; a Proxy can intercept property access via its get trap, inspecting the target's class and properties to assemble and bind specialized methods on-the-fly. For instance:
class Calculator {
  constructor(value = 0) {
    this.value = value;
  }

  add(x) {
    this.value += x;
    return this.value;
  }
}

const handler = {
  get(target, prop) {
    if (prop === 'multiply' && target.constructor.name === 'Calculator') {
      // Assemble dynamic method based on class inspection
      const originalValue = target.value;
      return function(y) {
        target.value = originalValue * y; // Tailored multiplication path
        return Reflect.get(target, 'value');
      }.bind(target);
    }
    return Reflect.get(target, prop);
  }
};

const calcProxy = new Proxy(new Calculator(10), handler);
console.log(calcProxy.multiply(5)); // Outputs 50, assembling method at runtime
Here the Proxy's get trap inspects the class name and assembles a multiply method only for Calculator instances, using Reflect.get to forward other operations and maintain default semantics.[37][69]
In Common Lisp, runtime code assembly based on type can be illustrated using the Common Lisp Object System (CLOS) to generate and evaluate functions dynamically. For example, a function can inspect an object's class with class-of and use compile to assemble a specialized lambda expression evaluated at runtime:
(defun assemble-processor (obj)
  (let ((obj-class (class-of obj)))
    (if (subtypep obj-class 'number)
        (compile nil `(lambda (x) (+ ,obj x)))       ; Tailored addition for numeric types
        (compile nil `(lambda (x) (list ,obj x)))))) ; Generic list for other types

(defparameter my-num 42) ; a number object
(funcall (assemble-processor my-num) 8) ; Outputs 50, using assembled numeric path
Both examples rely on runtime introspection, whether JavaScript's constructor.name or Lisp's class-of, to determine the object's type hierarchy. This inspection informs the generation of code snippets, such as conditional traps in proxies or quoted forms in Lisps, which are then assembled into executable units (e.g., bound functions or compiled lambdas) and integrated into the object's behavior. Tailored code paths emerge from this, where generic operations are replaced by class-specific implementations, ensuring that only relevant logic executes.[37][70]
Such assembly provides performance benefits by avoiding generic slow paths in dynamic language runtimes; type specialization replaces broad, type-checking code with optimized, known-type variants.[71]
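A loose Python analogue of such type-directed specialization is functools.singledispatch, which selects a registered implementation by the argument's class at call time (the process function and its variants are illustrative):

```python
from functools import singledispatch

# Generic fallback path: runs for any type without a registered variant
@singledispatch
def process(obj):
    return ["generic", obj]

# Specialized path, chosen whenever the argument is an int
@process.register
def _(obj: int):
    return obj + 8

print(process(42))   # → 50 (specialized numeric path, mirroring the Lisp example)
print(process("hi")) # → ['generic', 'hi'] (generic fallback)
```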
