
Instance (computer science)

from Wikipedia

In computer science, an instance or token (from metalogic and metamathematics) is an occurrence of a software element that is based on a type definition. [1]: 1.3.2  When created, an occurrence is said to have been instantiated, and both the creation process and the result of creation are called instantiation.

Examples

Class instance
An object-oriented programming (OOP) object created from a class. Each instance of a class shares a data layout but has its own memory allocation.
Procedural instance
Although this use of the concept is uncommon in computer science today, in Simula each procedure call was also considered an "instance" of the procedure. [1]: 1.3.2 
Computer instance
An occurrence of a virtual machine, which typically includes storage and a virtual CPU.
Polygonal model
In computer graphics, a polygonal model can be instantiated so that it is drawn several times in different locations in a scene. This can improve rendering performance, since a portion of the work needed to display each instance is reused.
Program instance
In a POSIX-oriented operating system, it refers to an executing process, instantiated for a program via system calls such as fork() and exec(). Each executing process is an instance of the program from which it was instantiated.[2]

from Grokipedia
In computer science, an instance refers to a specific, concrete realization or occurrence of a more general type, class, or structure, serving as a fundamental concept across multiple subfields.[1] In object-oriented programming (OOP), an instance is an object created from a class definition, embodying the class's attributes and methods with its own unique state and identity, allowing for multiple instances to coexist independently.[2] For example, if a class defines a "Car" with properties like color and speed, each instantiated car object represents a distinct instance.[3] In database systems, an instance denotes the actual content or state of a database at a given moment, comprising the collection of data stored within its schema, which contrasts with the schema's abstract structure.[4] This usage highlights the dynamic nature of data, where updates or queries alter the instance without changing the underlying design.[5] Similarly, in machine learning, an instance is an individual data point or example from a dataset, typically represented as a feature vector, used for training models or making predictions.[6] In cloud computing, an instance often refers to a virtual machine or computing resource provisioned from a template. In operating systems, it can denote a running process or execution of a program. These varied applications underscore the term's versatility in modeling specificity within broader abstractions.[7]

Object-Oriented Programming

Definition in OOP

In object-oriented programming (OOP), an instance refers to a concrete, runtime entity created from a class, embodying a unique object that encapsulates data and functionality specific to that realization. This instance serves as the tangible manifestation of the class's abstract definition, allowing multiple such entities to exist independently within a program. Each instance possesses three fundamental attributes: identity, state, and behavior. The identity provides a unique reference distinguishing it from other instances, even those of the same class, enabling direct addressing in memory. The state consists of the current values of the instance's data fields, which can vary among instances and evolve over time. Behavior is defined by the methods inherited from the class, dictating how the instance responds to messages or operations.[8] The term "instance" and its conceptual foundation originated in early OOP languages, gaining prominence through Smalltalk in the 1970s at Xerox PARC under Alan Kay, where objects were explicitly described as instances of classes with independent state and message-handling capabilities.[9] This usage was later adopted in C++, introduced by Bjarne Stroustrup in the 1980s as an extension of C to support OOP paradigms including class-based instantiation.[10] By the 1990s, it became standard in Java, where instances are explicitly created to model real-world entities with bundled state and behavior.[11] For example, in Java, the declaration Car myCar = new Car(); instantiates a Car object named myCar, which holds its own attribute values (state) like color and speed, while inheriting methods (behavior) such as drive() from the Car class.
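The three attributes of an instance can be seen in a minimal Python sketch mirroring the hypothetical Car class from the Java example above (class and attribute names are illustrative):

```python
# Hypothetical Car class, echoing the Java example in the text
class Car:
    def __init__(self, color, speed=0):
        self.color = color    # state: per-instance attribute values
        self.speed = speed

    def drive(self, delta):   # behavior: defined once by the class
        self.speed += delta

car_a = Car("red")
car_b = Car("blue")
car_a.drive(30)               # changes only car_a's state

print(car_a.speed, car_b.speed)  # 30 0 -- state is independent
print(car_a is car_b)            # False -- each instance has its own identity
```

Both objects share the drive() method defined by the class, yet mutating one leaves the other untouched, and the `is` comparison distinguishes their identities.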

Instance Creation and Lifecycle

In object-oriented programming, the process of creating an instance, known as instantiation, typically involves allocating memory for the new object and initializing its state. In languages like Java and C++, this is achieved using the new keyword, which dynamically allocates memory on the heap and invokes the object's constructor to set initial values. For example, in Java, the expression new ClassName(arguments) allocates storage and returns a reference to the newly created object. Similarly, in C++, the new expression allocates memory via the global operator new and then constructs the object by calling its constructor, returning a pointer to it. Python handles instantiation differently, without an explicit new keyword; calling a class as a function, such as ClassName(arguments), creates the instance on the heap and automatically invokes the __init__ method if defined.[12][13] Constructors play a crucial role in this process by ensuring the instance is properly initialized upon creation, setting up its internal state with provided arguments or default values. They are special methods named after the class (or __init__ in Python) that run automatically during instantiation and have no return type other than the implicit object reference or pointer. For instance, consider a simple Point class in Java:
public class Point {
    private int x, y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
Here, Point p = new Point(3, 4); allocates the object on the heap and calls the constructor to assign the coordinates. An equivalent in Python uses __init__:
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(3, 4)
In C++, the constructor is similarly invoked after heap allocation:
class Point {
private:
    int x, y;
public:
    Point(int x, int y) : x(x), y(y) {}
};

Point* p = new Point(3, 4);
This initialization establishes the instance's identity and mutable state, distinguishing it from other instances of the same class. Constructors can be overloaded to support various initialization scenarios, but they must complete successfully for the instance to be usable.[14][13] The lifecycle of an instance encompasses three primary stages: creation, usage, and destruction. During creation, as described, memory is allocated on the heap—Java's JVM heap for all objects, Python's private managed heap for Python objects, and C++'s free store via new—with references or pointers providing access without direct memory management by the programmer in most cases. In the usage stage, the instance maintains its state through method invocations, which can modify attributes or perform operations; for example, a move() method on the Point instance might update x and y based on current values, preserving the object's identity while altering its state. References to the instance are passed by value in Java and Python (copying the reference, not the object) or by pointer in C++, enabling shared access without duplicating the heap-allocated data.[15][16] Destruction occurs when the instance is no longer needed, reclaiming heap memory to prevent leaks. In managed languages like Java and Python, automatic garbage collection handles this: Java's GC identifies unreachable objects (those without live references) and deallocates them, potentially invoking a finalize() method if overridden, though this is deprecated in favor of try-with-resources or cleaners. Python employs reference counting combined with cyclic GC to detect and deallocate objects when their reference count reaches zero or cycles are broken, calling __del__ if defined. In C++, destruction is manual; the programmer uses delete to invoke the destructor (which cleans up resources) and deallocate memory via operator delete, ensuring explicit control over the lifecycle to avoid dangling pointers or leaks. 
Throughout the lifecycle, heap allocation supports dynamic sizing and polymorphism, but requires careful reference management to maintain efficiency and correctness.[17]
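The creation, usage, and destruction stages can be observed directly in CPython, where reference counting reclaims an instance as soon as its last reference disappears. This is a sketch: the immediate timing of __del__ is a CPython implementation detail, not a language guarantee.

```python
class Resource:
    destroyed = []                 # records names of reclaimed instances

    def __init__(self, name):
        self.name = name           # creation: state set by the constructor

    def __del__(self):             # runs when the instance is reclaimed
        Resource.destroyed.append(self.name)

r = Resource("r1")                 # creation on Python's managed heap
alias = r                          # usage: a second reference, same instance
del r                              # one reference gone; instance still alive
print(Resource.destroyed)          # []
del alias                          # refcount hits zero: CPython reclaims it
print(Resource.destroyed)          # ['r1']
```

Note that deleting the first reference does not destroy the object, since the alias still reaches it; only when the reference count drops to zero does destruction occur.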

Distinction from Class

In object-oriented programming, a class serves as an abstract template that defines the structure and behavior for a type of entity, including attributes (often called instance variables) and methods, while an instance represents a concrete, mutable realization of that class with its own specific state.[18] The class itself does not hold data values but provides the blueprint for creating and operating on instances, whereas each instance maintains independent values for the attributes defined by the class, allowing for mutability and individuality.[19] This distinction ensures that the class remains a static definition, focused on commonality, while the instance embodies runtime-specific details. A single class can produce multiple instances, each possessing its own unique state but sharing the same methods and behavioral logic inherited from the class.[20] For example, consider a BankAccount class that defines attributes like balance and methods such as deposit and withdraw; one instance, account1, might have a balance of $100, while another, account2, has a balance of $200—changes to one instance's state do not affect the others, yet both leverage the class's shared methods for operations. This separation supports encapsulation, where instance state is private and accessible only through class-defined methods, promoting modularity and reuse.[18] The instance-class distinction underpins key OOP principles like polymorphism and inheritance, as instances enable runtime variations in behavior across related classes. Through polymorphism, instances of subclasses can be substituted for instances of a superclass, allowing a method to invoke different implementations based on the actual instance type at execution time.[21] In inheritance hierarchies, instances inherit and potentially override class behaviors, facilitating flexible, extensible systems where variations emerge dynamically via specific instances rather than rigid class definitions alone.[19]
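The BankAccount example above can be sketched in Python to show independent instance state alongside shared class behavior (a minimal illustration; names follow the text):

```python
class BankAccount:
    def __init__(self, balance=0):
        self.balance = balance      # per-instance state

    def deposit(self, amount):      # behavior shared by all instances
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

account1 = BankAccount(100)
account2 = BankAccount(200)
account1.deposit(50)                # touches only account1's state

print(account1.balance)             # 150
print(account2.balance)             # 200
```

Encapsulation here means callers go through deposit() and withdraw() rather than manipulating balances arbitrarily; the class defines the operations once, while each instance carries its own balance.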

Database Management Systems

Database Instance

In database management systems (DBMS), a database instance refers to the active runtime environment that manages data storage, retrieval, and processing, consisting of memory structures, background processes, and associated files that enable the DBMS to operate. This in-memory representation allows the DBMS to handle concurrent user requests, maintain data consistency, and perform recovery operations without directly altering the persistent database files on disk.[22][23] Key components of a database instance include shared memory pools for caching data and query plans, control files that record metadata about the database structure, redo logs for capturing transaction changes to support recovery, and background processes or services that automate maintenance tasks. For example, in Oracle Database, the System Global Area (SGA) serves as the primary shared memory pool, while background processes such as the Log Writer (LGWR) flush redo entries to disk and the Database Writer (DBWn) updates data files from the buffer cache; control files store locations of data files and redo logs, ensuring the instance can locate and mount the database. Similarly, in Microsoft SQL Server, the instance encompasses the Database Engine's buffer pool for data caching, along with services like the SQL Server service for query execution and the Log Manager for handling transaction logs, with memory allocation configurable via min/max server memory settings to optimize performance.[22][24][25] The startup process of a database instance typically begins with a cold start, where the system loads necessary structures from disk into memory to transition from an inactive state to full operation. In Oracle, this involves the STARTUP command progressing through phases: nomount to allocate the SGA and start background processes, mount to read control files and validate the database structure, and open to make data files and redo logs accessible for transactions. 
In SQL Server, startup occurs by initiating the SQL Server service via the Windows Service Control Manager or SQL Server Configuration Manager, which allocates memory (including the buffer pool), starts worker threads, and loads system databases like master and model to prepare for user connections. This process ensures the instance is ready to manage workloads while minimizing downtime.[22][26][23] The term "database instance" was standardized in the early relational DBMS era, with Oracle introducing it in its Version 2 release in 1979 as the first commercially available SQL-based system, and IBM DB2 adopting a similar architecture upon its announcement in 1983.[27][28]

Schema vs. Instance

In relational database management systems (RDBMS), the schema serves as the logical blueprint defining the structure of the database, including tables, columns, data types, relationships, and constraints.[29] This structure is typically specified using Data Definition Language (DDL) statements, such as CREATE TABLE, which outline the framework without containing actual data values.[30] Introduced in the foundational relational model, the schema corresponds to the "relation heading," encompassing attribute names, domains, and keys to ensure data integrity and normalization.[31] In contrast, the database instance represents the actual populated data within that schema at a specific point in time, consisting of rows (tuples) that conform to the defined structure.[31] This instance is dynamic, changing through insertions, updates, or deletions via Data Manipulation Language (DML) operations, and it is governed by the ACID properties to maintain reliability during transactions: Atomicity ensures all operations complete or none do; Consistency preserves schema-defined rules; Isolation prevents interference between concurrent transactions; and Durability guarantees committed changes persist despite failures.[32] The primary distinction lies in their nature and usage: the schema is static and descriptive, providing a persistent template for data organization that evolves infrequently through DDL modifications, while the instance is dynamic and queryable, allowing retrieval and analysis of current data states using commands like SQL SELECT.[31] For example, an employee table schema might define columns for ID (integer primary key), Name (varchar), and Department (varchar with foreign key constraint), whereas a corresponding instance could include specific rows such as {ID: 1, Name: "Alice Smith", Department: "Engineering"}.[29] This separation enables the relational model to support data independence, where changes to the instance do not alter the schema, and vice versa, 
facilitating scalable and maintainable database design.[31]
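The schema/instance split can be demonstrated with Python's built-in sqlite3 module (an illustrative sketch; the employee table follows the example in the text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Schema: static structure, defined once with DDL
conn.execute("""
    CREATE TABLE employee (
        id         INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        department TEXT
    )
""")

# Instance: the rows currently stored, modified with DML
conn.execute("INSERT INTO employee VALUES (1, 'Alice Smith', 'Engineering')")
conn.execute("INSERT INTO employee VALUES (2, 'Bob Jones', 'Sales')")

rows = conn.execute("SELECT name FROM employee ORDER BY id").fetchall()
print(rows)        # [('Alice Smith',), ('Bob Jones',)]

conn.execute("DELETE FROM employee WHERE id = 2")  # the instance changes,
remaining = conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]
print(remaining)   # 1 -- but the schema (table definition) is unchanged
```

Inserting and deleting rows alters only the instance; the CREATE TABLE definition persists unchanged, which is exactly the data independence described above.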

Cloud Computing

Virtual Machine Instance

A virtual machine (VM) instance in cloud computing is a software-emulated computer that provides an isolated, on-demand compute environment, running its own operating system and applications on shared physical hardware managed by a cloud provider.[33] It abstracts the underlying physical infrastructure through virtualization technology, allowing users to deploy and manage virtual servers without direct access to the host hardware.[34] VM instances are typically hosted on hypervisors such as KVM, an open-source Linux kernel module that enables hardware-assisted virtualization, or VMware's vSphere platform, which supports enterprise-scale VM management.[35] A seminal example is Amazon EC2, launched in 2006, which pioneered on-demand VM provisioning in the cloud.[36] Key features of a VM instance include configurable compute resources like CPU cores and clock speed, memory allocation, and attached storage volumes, enabling customization for diverse workloads such as web hosting or data processing.[37] These instances maintain strong isolation from the host operating system and other VMs through hypervisor-enforced partitioning of resources, ensuring security and fault tolerance even if one instance fails.[38] This isolation is achieved via hardware virtualization extensions in modern CPUs, which prevent direct access to physical devices.[39] Provisioning a VM instance occurs on-demand through cloud provider APIs, where users specify parameters like instance type, operating system image, and network configuration to instantiate the VM in seconds or minutes.[37] Billing is usage-based, typically charged per hour or second of runtime, depending on the provider and instance configuration, with costs scaling according to allocated resources. 
For instance, on AWS, users can launch an m5.large EC2 instance—offering 2 vCPUs and 8 GiB of memory—pre-installed with Ubuntu Linux via the AWS Management Console or CLI, providing a ready-to-use environment for application deployment.[40][41]

Instance Scaling and Management

In cloud computing, instance scaling refers to the dynamic adjustment of virtual machine (VM) resources to meet varying workload demands, ensuring performance, availability, and cost efficiency. Horizontal scaling involves adding or removing instances to distribute load, while vertical scaling modifies resources within a single instance. These techniques are essential for managing scalable infrastructure, often automated through cloud provider services.[42] Horizontal scaling, also known as scaling out or in, dynamically adjusts the number of VM instances in a group based on metrics like CPU utilization or traffic volume. For example, Amazon EC2 Auto Scaling groups allow users to define minimum, desired, and maximum instance counts—such as a minimum of four instances and a maximum of twelve—and apply scaling policies that automatically launch or terminate instances to maintain application availability across Availability Zones. This approach enhances fault tolerance and cost optimization by using a mix of On-Demand and Spot Instances, with features like automated Spot replacement to handle interruptions. Benefits include improved elasticity for unpredictable workloads, as seen in web applications that scale during peak hours.[42][43] Vertical scaling, or scaling up/down, resizes the computational resources of an existing VM instance, such as increasing CPU cores or memory allocation, which typically requires stopping and restarting the instance and may incur downtime. In AWS EC2, this is achieved by changing instance types, for instance upgrading from a t2.micro (1 vCPU, 1 GiB RAM) to a t2.medium (2 vCPUs, 4 GiB RAM), provided the root volume uses Elastic Block Store (EBS) and the new type is compatible. The process involves stopping the instance, selecting a new type, and restarting, with tools like AWS Compute Optimizer recommending optimal sizes based on historical usage to avoid over-provisioning. 
Limitations include potential incompatibility with certain instance store configurations, requiring migration to a new instance, and it is best suited for workloads that benefit from single-instance performance boosts rather than distributed processing.[44][45][46] Instance management encompasses monitoring, orchestration, and lifecycle controls to maintain operational integrity. Monitoring tools like Amazon CloudWatch collect metrics such as CPU, memory, and disk usage from EC2 instances via agents, enabling dashboards and alarms that trigger scaling actions when thresholds are breached, such as alerting on high latency to initiate horizontal scaling. For orchestration, Kubernetes automates the deployment, scaling, and management of containerized applications on VM instances, supporting horizontal pod autoscaling based on CPU/memory metrics and self-healing through container restarts or rescheduling for high availability. Termination, a key management step, permanently stops instances to free resources, involving a graceful shutdown by default; in Auto Scaling groups, terminated instances are often replaced automatically, but users must back up data to persistent storage like EBS volumes to prevent loss, as instance store data is ephemeral.[47][48][49][50] Security in instance scaling and management relies on mechanisms that secure access and configuration without exposing credentials. The EC2 Instance Metadata Service (IMDS) provides secure, instance-local access to metadata like instance ID, AMI details, and user data scripts via a link-local endpoint (http://169.254.169.254), aiding automated configuration during scaling events while recommending IMDSv2 for protection against server-side request forgery attacks. 
IAM roles for EC2 instances grant temporary permissions to AWS services, allowing instances to access resources like S3 buckets without embedding long-term access keys, thus reducing credential compromise risks during scaling operations where new instances inherit roles for consistent security. These features ensure that scaling maintains compliance and isolation, with policies enforcing least-privilege access across instance fleets.[51][52][53][54]
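A horizontal scaling policy of the kind described above can be sketched as a pure function. The thresholds, step size, and the 4/12 bounds are illustrative values, not provider defaults:

```python
def desired_count(current, cpu_pct, minimum=4, maximum=12,
                  scale_out_at=70.0, scale_in_at=30.0):
    """Toy step-scaling decision: add or remove one instance per evaluation,
    clamped to the group's minimum and maximum counts."""
    if cpu_pct > scale_out_at:
        current += 1            # scale out under load
    elif cpu_pct < scale_in_at:
        current -= 1            # scale in when idle
    return max(minimum, min(maximum, current))

print(desired_count(4, 85.0))   # 5  -- high CPU adds an instance
print(desired_count(4, 10.0))   # 4  -- already at the minimum of four
print(desired_count(12, 95.0))  # 12 -- capped at the maximum of twelve
```

In a real Auto Scaling group this decision is driven by monitoring alarms rather than direct function calls, but the clamping to configured minimum and maximum counts works the same way.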

Other Contexts

In Machine Learning

In machine learning, an instance refers to a single data sample or example within a dataset, typically represented as a row in a data table consisting of a set of input features (also called attributes or variables) and, in the case of supervised learning, an associated output label or target value. This conceptualization aligns with foundational definitions in the field, where datasets are collections of such instances used to train models that generalize patterns to unseen data.[55] For example, in supervised learning tasks like classification or regression, each instance provides the model with concrete evidence of the relationship between features and outcomes, enabling algorithms to learn decision boundaries or predictive functions. Instances play a central role in the training phase of various machine learning algorithms, particularly instance-based methods that rely directly on the training examples for inference rather than building an explicit model. In the k-nearest neighbors (k-NN) algorithm, a new instance is classified by finding the k most similar training instances based on a distance metric, such as Euclidean distance, and assigning the most common label among them, making it a lazy learner that stores all instances during training.[56] Similarly, in support vector machines (SVM), instances are used to construct a hyperplane that maximizes the margin between classes, with the most influential ones—known as support vectors—being those closest to the boundary and retained to define the model's decision function.[57] These approaches highlight how instances serve as the building blocks for pattern recognition, allowing models to adapt to complex data distributions without assuming parametric forms. A representative example of an instance can be found in the Iris dataset, a classic benchmark for classification tasks introduced by Ronald Fisher in 1936 and hosted by the UCI Machine Learning Repository. 
Each of the 150 instances in this dataset describes measurements of an iris flower, such as one instance with features {sepal_length: 5.1, sepal_width: 3.5, petal_length: 1.4, petal_width: 0.2} and label "Iris-setosa," which contributes to training models to distinguish between three species based on floral dimensions. Preprocessing instances is often essential to ensure data quality before training, addressing issues like missing values or varying scales that could bias algorithms. Techniques such as imputation replace missing feature values in an instance with statistical estimates (e.g., the mean of the column), while normalization scales features to a common range, such as unit variance, to prevent dominance by larger-valued attributes.[58][59] These methods were popularized in practical implementations through the scikit-learn library, first developed in 2007 as an open-source toolkit for Python-based machine learning.[60]
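As a sketch of instance-based learning, a tiny k-NN classifier can be written over hand-picked feature vectors loosely modeled on Iris measurements (the values below are illustrative, not rows copied from the actual dataset):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    instances; each training instance is a (feature_vector, label) pair."""
    nearest = sorted(train, key=lambda inst: math.dist(inst[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

# Toy instances: (sepal_length, sepal_width, petal_length, petal_width)
train = [
    ((5.1, 3.5, 1.4, 0.2), "setosa"),
    ((4.9, 3.0, 1.4, 0.2), "setosa"),
    ((6.7, 3.1, 4.7, 1.5), "versicolor"),
    ((6.3, 3.3, 4.7, 1.6), "versicolor"),
    ((6.5, 3.0, 5.8, 2.2), "virginica"),
    ((7.1, 3.0, 5.9, 2.1), "virginica"),
]

print(knn_classify(train, (5.0, 3.4, 1.5, 0.2)))  # setosa
```

The classifier builds no model at all: it simply stores every training instance and compares a new instance against all of them at query time, which is why k-NN is called a lazy learner.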

In Operating Systems

In operating systems, an instance refers to a process, which is a program in execution possessing its own address space, including text, data, heap, and stack segments, along with a unique process identifier (PID) and associated resources such as open files and CPU registers.[61][62] This encapsulation allows the operating system to manage multiple such instances independently, ensuring isolation and controlled sharing of system resources.[63] Process instances are created through system calls, notably the fork() function in Unix-like systems, which duplicates the calling parent process to produce a child process with nearly identical state but a distinct PID and execution path.[64][63] Multi-threading extends this by enabling multiple concurrent threads within a single process instance; each thread maintains its own program counter, stack, and registers but shares the process's memory and resources, facilitating efficient parallelism without full process duplication.[65][66] A practical example is executing the ls command in a Unix shell, which invokes the shell to create a new child process instance via fork() followed by exec() to load the ls executable; this instance receives a unique PID and runs briefly to list directory contents before terminating, all under the oversight of the operating system's scheduler.[67][64] The operating system manages these process instances through resource allocation and context switching, where the CPU state of the current process— including registers, program counter, and memory mappings—is saved upon interruption, and the state of the next ready process is restored to enable multitasking on shared hardware.[68] This mechanism originated in 1960s time-sharing systems like Multics, which used descriptor base registers to accelerate switches between processes by minimizing address space reconfiguration.[69][70] In cloud computing environments, virtual machine instances host operating systems that in turn manage their internal 
process instances for application execution.
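A new process instance can be created portably from Python via the subprocess module, which on Unix-like systems uses fork() and exec() underneath; each run receives its own PID (a minimal sketch):

```python
import os
import subprocess
import sys

# Launch a child process instance: a short Python program printing its PID
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getpid())"],
    capture_output=True, text=True,
)
child_pid = int(child.stdout)

print(os.getpid())   # the parent process's PID
print(child_pid)     # the child's PID -- a distinct process instance
```

Running the same program twice yields two instances with different PIDs, each with its own address space and resources managed by the operating system's scheduler.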
