Computer simulation
from Wikipedia
A 48-hour computer simulation of Typhoon Mawar using the Weather Research and Forecasting model
Process of building a computer model, and the interplay between experiment, simulation, and theory

Computer simulation is the running of a mathematical model on a computer, the model being designed to represent the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.[1]

Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program.[2] Other examples include a 1-billion-atom model of material deformation;[3] a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005;[4] a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.[5]

Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification.[6]

Simulation versus model


A model consists of the equations used to capture the behavior of a system. By contrast, computer simulation is the actual running of the program that performs the algorithms which solve those equations, often in an approximate manner. Simulation, therefore, is the process of running a model. Thus one would not "build a simulation"; instead, one would "build a model (or a simulator)", and then either "run the model" or, equivalently, "run a simulation".

History


Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.[7]

Data preparation


The external data requirements of simulations and models vary widely. For some, the input might be just a few numbers (for example, simulation of a waveform of AC electricity on a wire), while others might require terabytes of information (such as weather and climate models).

Input sources also vary widely:

  • Sensors and other physical devices connected to the model;
  • Control surfaces used to direct the progress of the simulation in some way;
  • Current or historical data entered by hand;
  • Values extracted as a by-product from other processes;
  • Values output for the purpose by other simulations, models, or processes.

Lastly, the time at which data is available varies:

  • "invariant" data is often built into the model code, either because the value is truly invariant (e.g., the value of π) or because the designers consider the value to be invariant for all cases of interest;
  • data can be entered into the simulation when it starts up, for example by reading one or more files, or by reading data from a preprocessor;
  • data can be provided during the simulation run, for example by a sensor network.

Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula. There are now many others.

Systems that accept data from external sources must be very careful in knowing what they are receiving. While it is easy for computers to read in values from text or binary files, what is much harder is knowing what the accuracy of the values is (compared to measurement resolution and precision). Such values are often expressed as "error bars", a minimum and maximum deviation from the value, defining the range within which the true value is expected to lie. Because digital computer mathematics is not perfect, rounding and truncation errors multiply this error, so it is useful to perform an "error analysis"[8] to confirm that values output by the simulation will still be usefully accurate.
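As an illustration of this kind of crude error analysis, the following minimal sketch (not from the article; the model function and parameter values are hypothetical) propagates an input's error bar through a simulation step by evaluating the model at the nominal value and at both ends of its uncertainty interval. This bracketing approach assumes the model's response is monotonic over the interval; a fuller error analysis would also account for rounding and truncation inside the computation.

```python
# A minimal sketch of propagating an input error bar through a model step.
# The model and the numbers are illustrative assumptions, not from the article.

def model_step(x):
    """Hypothetical model: a smooth nonlinear response to the input value."""
    return 3.2 * x ** 2 - 0.5 * x

def propagate_error_bar(nominal, minus, plus):
    """Evaluate the model at the nominal value and at the interval ends."""
    low, high = sorted((model_step(nominal - minus), model_step(nominal + plus)))
    centre = model_step(nominal)
    return centre, centre - low, high - centre

if __name__ == "__main__":
    value, err_minus, err_plus = 2.0, 0.05, 0.05   # e.g. a sensor reading +/- 0.05
    centre, lo_err, hi_err = propagate_error_bar(value, err_minus, err_plus)
    print(f"output = {centre:.4f}  (-{lo_err:.4f} / +{hi_err:.4f})")
```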

Types


Models used for computer simulations can be classified according to several independent pairs of attributes, including:

  • Stochastic or deterministic (and as a special case of deterministic, chaotic)
  • Steady-state or dynamic
  • Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
  • Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-body mechanical systems (described primarily by DAEs) or dynamics simulation of field problems, e.g. CFD or FEM simulations (described by PDEs).
  • Local or distributed.

Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes:

  • Simulations which store their data in regular grids and require only next-neighbor access are called stencil codes. Many CFD applications belong to this category (a minimal sketch of such a stencil update follows this list).
  • If the underlying graph is not a regular grid, the model may belong to the meshfree method class.
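To make the stencil-code idea concrete, the following minimal sketch (an illustrative assumption, not taken from the article) performs a Jacobi-style update of the two-dimensional heat equation on a regular grid. Each cell reads only its four nearest neighbours, which is exactly the next-neighbor access pattern that characterizes a stencil code.

```python
# A minimal stencil-code sketch: one explicit time step of the 2-D heat
# equation on a regular grid, each cell updated from its four neighbours.
import numpy as np

def heat_step(u, alpha=0.1):
    """Return the grid after one explicit time step (interior cells only)."""
    new = u.copy()
    new[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        - 4.0 * u[1:-1, 1:-1]
    )
    return new  # boundary rows/columns are held fixed (Dirichlet boundary)

grid = np.zeros((64, 64))
grid[32, 32] = 100.0          # a hot spot in the middle of the domain
for _ in range(200):
    grid = heat_step(grid)
```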

For steady-state simulations, equations define the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.

  • Dynamic simulations attempt to capture changes in a system in response to (usually changing) input signals.
  • Stochastic models use random number generators to model chance or random events;
  • A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time they should occur. The simulator reads the queue and triggers new events as each event is processed (a minimal sketch of such an event queue follows this list). It is not important to execute the simulation in real time. It is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or the sequence of events.
  • A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer.
  • A special type of discrete simulation that does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules that determine how the agent's state is updated from one time-step to the next.
  • Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (simulation) (HLA) and the Test and Training Enabling Architecture (TENA).
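The discrete event approach described above can be sketched in a few lines. The example below (an illustrative assumption, not from the article; the arrival and service rates are arbitrary) simulates a single-server queue: the simulator repeatedly pops the earliest event from a time-ordered queue and may schedule new future events while handling it, which is the core loop of any DES engine.

```python
# A minimal discrete-event sketch: a single-server queue driven by a
# time-ordered event queue (min-heap keyed on simulated time).
import heapq
import random

def run_des(n_customers=5, seed=1):
    random.seed(seed)
    events = []                                   # (time, kind) min-heap
    t = 0.0
    for _ in range(n_customers):                  # schedule all arrivals up front
        t += random.expovariate(1.0)              # exponential inter-arrival times
        heapq.heappush(events, (t, "arrival"))
    server_free_at = 0.0
    while events:
        now, kind = heapq.heappop(events)         # process the earliest event first
        if kind == "arrival":
            start = max(now, server_free_at)      # wait if the server is busy
            service = random.expovariate(1.5)
            server_free_at = start + service
            heapq.heappush(events, (server_free_at, "departure"))
        print(f"t={now:6.3f}  {kind}")

run_des()
```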

Visualization


Formerly, the output data from a computer simulation was sometimes presented in a table or a matrix showing how data were affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models. However, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving-images or motion-pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers could not necessarily read out numbers or quote math formulas, from observing a moving weather chart they might be able to predict events (and "see that rain was headed their way") much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.

Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.

Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.

In science

Computer simulation of the process of osmosis

Computer simulations in science are generally derived from an underlying mathematical description of the system under study. Specific examples of such simulations include:

  • statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
  • agent-based simulation has been used effectively in ecology, where it is often called "individual-based modeling" and is used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
  • time-stepped dynamic models. In hydrology there are several such hydrology transport models, such as the SWMM and DSSAM models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
  • computer simulations have also been used to formally model theories of human cognition and performance, e.g., ACT-R.
  • computer simulation using molecular modeling for drug discovery.[10]
  • computer simulation to model viral infection in mammalian cells.[9]
  • computer simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules.[11]
  • Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
  • An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows this complex subject to be simplified to down-to-earth presentations of molecular theory.

Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's Tierra.

In social sciences, computer simulation is an integral component of the five angles of analysis fostered by the data percolation methodology,[12] which also includes qualitative and quantitative methods, reviews of the literature (including scholarly literature), and interviews with experts, and which forms an extension of data triangulation. As with any other scientific method, replication is an important part of computational modeling.[13]

In practical contexts


Computer simulations are used in a wide variety of practical contexts, including engineering design and safety testing, training, forecasting, and software debugging; several of these are discussed below.

The reliability of, and the trust people put in, computer simulations depends on the validity of the simulation model; therefore, verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulations is the reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, it is a particular point of attention in stochastic simulations, where the random numbers should in fact be pseudo-random numbers drawn from a reproducible generator. An exception to reproducibility is human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly.
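The reproducibility point can be illustrated with a minimal sketch (an assumption for illustration, not from the article; the model and seed value are arbitrary): fixing the pseudo-random generator's seed makes every run of a stochastic model produce exactly the same sequence of outcomes.

```python
# A minimal sketch of reproducible stochastic simulation via a fixed seed.
import random

def noisy_model(steps, seed):
    rng = random.Random(seed)          # seeded generator -> reproducible run
    state = 0.0
    for _ in range(steps):
        state += rng.gauss(0.0, 1.0)   # stochastic state update
    return state

print(noisy_model(1000, seed=42))      # identical output on every execution
print(noisy_model(1000, seed=42))
```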

Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build and test a unique prototype. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.[15]

Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time, e.g., in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.

In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.

Pitfalls


Although sometimes ignored in computer simulations, it is important to perform a sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g., the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures.
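The following minimal sketch (an illustrative assumption, not from the article; the parameter names and distributions are invented) shows the issue in the Monte Carlo setting described above: when one input is only known to one significant figure, the spread of the sampled outputs dwarfs the later digits, so quoting four significant figures would be misleading.

```python
# A minimal sensitivity sketch: the poorly known input dominates the
# uncertainty of the Monte Carlo output.
import random

random.seed(0)
N = 100_000
results = []
for _ in range(N):
    net_ratio = random.uniform(0.25, 0.35)      # "0.3", known to 1 significant figure
    thickness = random.gauss(50.0, 1.0)         # well-constrained parameter
    area = random.gauss(2.0, 0.05)              # well-constrained parameter
    results.append(net_ratio * thickness * area)

mean = sum(results) / N
spread = (max(results) - min(results)) / 2
print(f"estimate: {mean:.4f} +/- {spread:.1f}")  # the spread swamps the 4th digit
```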

from Grokipedia
Computer simulation is a computational technique that employs digital models to replicate and analyze the behavior of real-world or hypothetical systems, enabling the study of dynamic processes and the prediction of outcomes through numerical methods. The origins of computer simulation trace back to World War II, when operations research pioneered its use for modeling military logistics and decision-making under uncertainty. In the 1940s, the development of electronic computers such as ENIAC facilitated early applications, such as the Monte Carlo method for simulating particle behavior in nuclear research at Los Alamos. By the 1950s and 1960s, simulations expanded into industry and operations research with tools like Simula, evolving alongside computing advancements to handle increasingly complex models across disciplines.

Key methods in computer simulation include discrete event simulation, which models state changes at specific points in time, such as queueing systems; continuous simulation, which solves differential equations for systems with smooth variations, such as fluid flows; Monte Carlo simulation, relying on random sampling to estimate probabilistic outcomes; and agent-based modeling, simulating interactions among autonomous entities from which complex phenomena emerge. These approaches often integrate mathematical, statistical, and logical frameworks to mimic real-world systems, with validation through comparison to empirical data.

Applications of computer simulation span diverse fields, including scientific research into phenomena like galaxy formation or climate modeling where experiments are infeasible; engineering for optimizing networks, robotics, and manufacturing processes; healthcare for patient flow analysis; and business for market forecasting and risk assessment. In cybersecurity and AI, it tests vulnerabilities and agent behaviors, while in social sciences, it explores emergent patterns like urban segregation. Computer simulations play a crucial role in modern science and engineering by supplementing physical experiments, providing insights into sparse-data environments, and enabling "what-if" analyses to reduce real-world risks. Emerging trends integrate artificial intelligence, cloud computing, and digital twins for real-time, data-driven predictions, enhancing accuracy and applicability in fields like immersive technologies and adaptive systems.

Fundamentals

Definition and Overview

Computer simulation is the use of a computer to imitate the operation of real-world processes or systems over time, typically to investigate their behavior under different conditions or scenarios. This approach leverages computational power to replicate dynamic interactions, enabling virtual experimentation that mirrors physical, biological, or abstract phenomena without direct intervention in the actual environment. By serving as a bridge between theoretical models and empirical reality, computer simulation facilitates deeper insights into complex systems that are difficult or impossible to observe directly.

At its core, a computer simulation comprises several basic components: input data that specifies initial conditions and parameters, model algorithms that encode the rules governing the system's evolution, a computational environment to execute the iterations, and output results that visualize or quantify the simulated outcomes. These elements work together to approximate real-world dynamics, allowing users to input variables, run iterative calculations, and analyze emergent patterns. The process supports prediction by forecasting potential outcomes, optimization by testing configurations for efficiency, and safe experimentation by exploring "what-if" scenarios free from real-world risks such as financial loss or safety hazards.

Illustrative examples highlight the versatility of computer simulation. A simple probabilistic simulation might replicate coin flips to estimate the likelihood of heads or tails over thousands of trials, demonstrating convergence to expected probabilities through repeated random events. In a more applied context, simulations of urban traffic can model vehicle movements, signal timings, and congestion patterns to inform planning decisions that enhance mobility and reduce delays. Such cases underscore how simulations can be discrete or continuous, depending on the system's nature, as explored in subsequent sections.
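The coin-flip example above can be sketched in a few lines. The code below (an illustrative sketch, not from the article; the seed and sample sizes are arbitrary) shows how the fraction of simulated heads converges toward the expected probability of 0.5 as the number of trials grows.

```python
# A minimal Monte Carlo sketch of the coin-flip example: convergence of the
# observed fraction of heads toward 0.5 with increasing trial counts.
import random

random.seed(7)                               # fixed seed for a reproducible run
for n_flips in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    print(f"{n_flips:>9} flips: fraction of heads = {heads / n_flips:.4f}")
```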

Simulation versus Modeling

Modeling refers to the creation of abstract representations—mathematical, logical, or computational—of real-world systems to capture their essential behaviors and relationships. These representations simplify complex phenomena by focusing on key variables and interactions while abstracting away irrelevant details. The primary distinction between modeling and simulation lies in their roles: modeling constructs the representational framework, often through static or dynamic equations that describe system states, whereas simulation involves iteratively executing that model computationally to produce dynamic outputs over time or across scenarios. In modeling, the emphasis is on abstraction and representation; in simulation, the focus shifts to execution, where the model is "run" to observe evolving behaviors under specified conditions.

Simulation builds directly on a model, requiring it as a foundational element but extending it through computational execution, repeated iterations, and testing of varied scenarios to explore outcomes. Conversely, models can stand alone without simulation, such as when analytical solutions allow direct computation of results without iterative runs. This relationship positions simulation as an operational tool that animates models to mimic real-system dynamics. Inputs for simulation often refine models with real-world data to enhance accuracy.

A key advantage of simulation over pure modeling is its ability to manage high levels of complexity, nonlinearity, and uncertainty that render analytical modeling intractable or overly simplistic. While analytical models may yield closed-form solutions for linear systems, simulations approximate behaviors in nonlinear environments by discretizing time or events, incorporating stochastic elements, and scaling to multifaceted interactions. For instance, the logistic equation,

$$\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right),$$

serves as a model for population growth, where $P$ is the population size, $r$ is the growth rate, and $K$ is the carrying capacity; it can be solved analytically to predict equilibrium. Simulating this model computationally, however, allows experimentation with varying initial conditions or parameters—such as a fluctuating $r$ due to environmental variability—to generate time-series predictions and visualize trajectories under different scenarios.
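A minimal sketch of the logistic example follows (illustrative only, not from the article; the step size, parameter values, and noise model are assumptions). It advances the model with simple Euler time steps while letting the growth rate fluctuate randomly, which is exactly the kind of scenario that is awkward analytically but straightforward to simulate.

```python
# A minimal sketch: Euler simulation of dP/dt = r*P*(1 - P/K) with a
# randomly fluctuating growth rate r to mimic environmental variability.
import random

def simulate_logistic(P0=10.0, K=1000.0, r_mean=0.3, r_noise=0.1,
                      dt=0.1, steps=500, seed=3):
    rng = random.Random(seed)
    P, trajectory = P0, [P0]
    for _ in range(steps):
        r = r_mean + rng.gauss(0.0, r_noise)      # fluctuating growth rate
        P += dt * r * P * (1.0 - P / K)           # Euler update of dP/dt
        trajectory.append(P)
    return trajectory

traj = simulate_logistic()
print(f"final population ~ {traj[-1]:.1f} (carrying capacity K = 1000)")
```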

History

Early Developments

The origins of computer simulation trace back to the pre-digital era, when analog mechanical devices were employed to model complex natural phenomena. In 1872–1873, William Thomson, later known as Lord Kelvin, designed and constructed the first tide-predicting machine, a mechanical analog computer that simulated tidal variations by mechanically summing multiple harmonic components derived from astronomical influences. This device used interconnected gears to represent the amplitudes and phases of cyclic motions due to the Moon, the Sun, and the Earth's rotation, with a hand-cranked mechanism tracing predicted tide heights on paper over a year's period in just four hours. Such innovations marked an early form of simulation by replicating physical processes through mechanical linkages rather than direct observation.

Building on these foundations, mechanical integrators emerged in the late 19th century to solve differential equations, providing a means to simulate dynamic systems like fluid flows and oscillations. James Thomson, Lord Kelvin's brother, developed disk-and-sphere integrators in the 1870s, initially for analyzing tidal data by integrating functions from graphical records. By 1886, these were refined into practical mechanical integrators for tide prediction and extended to broader applications, such as harmonic analysis, laying the groundwork for analog computation of continuous processes. Vannevar Bush's 1931 differential analyzer at MIT further advanced this by linking multiple Thomson-style integrators with torque amplifiers to handle higher-order differential equations, simulating scenarios in fields such as ballistics and electrical networks before electronic computers existed.

The transition to digital simulation accelerated during World War II, particularly through the Manhattan Project, where early electronic computers addressed nuclear weapon design challenges. The ENIAC, completed in 1946, was repurposed from artillery calculations to perform simulations of nuclear implosions and neutron behavior, requiring extensive manual reconfiguration but enabling unprecedented computational scale for Los Alamos scientists. John von Neumann played a pivotal role, contributing to the Monte Carlo method in 1946 alongside Stanislaw Ulam, a probabilistic technique that modeled random neutron paths in atomic bomb assemblies using statistical sampling on early computers. This approach, first implemented on ENIAC in 1948, revolutionized simulations of uncertain physical processes by approximating solutions to intractable equations through repeated random trials.

Punch-card systems and rudimentary programming facilitated these early digital efforts, processing vast datasets for simulations of physical phenomena like shock waves in explosive implosions at Los Alamos from 1944 onward. IBM tabulators and sorters, handling millions of cards, computed hydrodynamic equations iteratively, bridging mechanical data handling with electronic execution. By the early 1950s, this infrastructure supported the first numerical weather simulations; in 1950, Jule Charney, Arnt Eliassen, and John von Neumann used ENIAC to integrate the barotropic vorticity equation over 24 hours, producing a rudimentary 24-hour forecast that validated computational weather prediction. These milestones established simulation as a core computational tool, influencing subsequent high-performance methods.

Modern Advances

The 1970s and 1980s marked a significant expansion in computer simulation capabilities, particularly through the rise of finite element methods (FEM) for engineering applications. FEM, which discretizes complex structures into smaller elements to approximate solutions to partial differential equations, gained prominence for simulating dynamic behaviors such as structural crashes and vibrations in automotive and aerospace designs. This period also saw the extending influence of Simula, a programming language originally developed in the 1960s, which introduced object-oriented concepts and class structures that facilitated more modular and reusable simulation code, impacting subsequent discrete event simulations.

Key milestones included the World3 system dynamics model, used in the 1972 Limits to Growth report to simulate global interactions among population, industrial output, resources, and pollution, projecting potential overshoot and collapse if growth trends continued unchecked. Another was the Daisyworld model, introduced in 1983, which demonstrated planetary self-regulation through a simple simulation of black and white daisies modulating surface albedo to stabilize temperature under varying solar luminosity, supporting the Gaia hypothesis.

In the 1990s and 2000s, simulations increasingly integrated with high-performance computing (HPC), enabling larger-scale and more realistic models across disciplines. This era leveraged parallel processing and supercomputers to handle complex computations, such as fluid-dynamics and electromagnetic simulations, accelerating adoption in scientific research and industry. A notable example was a 1997 U.S. military simulation of a desert battle, modeling 66,239 vehicles including tanks and trucks on dynamic terrain to evaluate tactical scenarios, showcasing HPC's role in defense applications. These advancements allowed for petascale simulations by the late 2000s, transforming fields like climate modeling and materials science.

From the 2010s onward, pursuits toward exascale computing—systems capable of at least one exaFLOP (10^18 floating-point operations per second)—have driven unprecedented simulation fidelity, with milestones including the deployment of the Frontier supercomputer in 2022 as the first exascale system, followed by Aurora in 2024 as the second, and JUPITER in 2025 as Europe's first exascale supercomputer. Such platforms enable detailed climate models underpinning IPCC reports, like those in the Sixth Assessment Report (2021), which use coupled atmosphere-ocean simulations to project global warming scenarios under various emission pathways, achieving resolutions down to kilometers for regional impacts. In biology, AlphaFold's 2020 breakthrough integrated deep learning with evolutionary data to predict protein structures with atomic accuracy (median GDT_TS score of 92.4 in CASP14), revolutionizing folding simulations by reducing computation time from years to hours and aiding drug discovery.

By 2025, trends in simulation emphasize open-source tools and cloud-based platforms, democratizing access to high-fidelity modeling. Open-source frameworks like those extending CloudSim facilitate scalable simulations of distributed systems, while cloud services support on-demand HPC for collaborative research, with the global cloud-based simulation market projected to grow at 15-20% annually due to cost efficiencies and elasticity. Recent advances also incorporate AI for enhanced predictive accuracy, as explored in subsequent sections.

Classification

Discrete and Continuous Simulations

Computer simulations are classified into discrete and continuous types based on how they handle the progression of time and state changes in the modeled system. Discrete simulations advance the system state only at specific, predefined event times, while continuous simulations model variables that evolve smoothly over time. This distinction allows for tailored approaches to different types of systems, such as event-driven processes versus those governed by physical laws.

In discrete simulations, the system's state remains constant between discrete events, and updates occur instantaneously at event occurrences, such as arrivals or departures in a queueing system. The key method is event scheduling, where future events are queued and processed in chronological order to simulate the system's evolution efficiently, without unnecessary computations during idle periods. A basic representation uses a difference equation, such as $X_{t+1} = f(X_t, e)$, where $X_t$ is the state at time $t$, $e$ denotes the event, and $f$ defines the state transition. This approach is particularly suited for modeling manufacturing assembly lines, where events like part arrivals or machine failures drive the simulation.

Continuous simulations, in contrast, represent systems where state variables change continuously and smoothly over time, often described by ordinary differential equations (ODEs). These are solved numerically using integration methods, such as the Runge-Kutta family of algorithms, which approximate the solution by evaluating the derivative at multiple points within each time step for improved accuracy. A foundational ODE form is $\frac{dx}{dt} = f(x, t)$, where $x$ is the state variable and $f$ captures the rate of change. Examples include simulating chemical reaction kinetics, where concentrations evolve continuously according to reaction rates, or fluid dynamics in pipelines modeled via Navier-Stokes equations.

Hybrid simulations integrate discrete and continuous elements to model complex systems that exhibit both behaviors, such as cyber-physical systems where discrete control logic interacts with continuous physical processes. These approaches synchronize event-driven updates with numerical integration steps, often using tools like hybrid automata to manage transitions between modes. For instance, in automotive control systems, discrete controller triggers may adjust continuous vehicle dynamics.
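The Runge-Kutta idea mentioned above can be sketched concretely. The code below (an illustrative sketch, not from the article; the test equation and step size are assumptions) implements the classical fourth-order step for $\frac{dx}{dt} = f(x, t)$, sampling the derivative at four points inside each time step and comparing the result against an exact solution.

```python
# A minimal sketch of the classical fourth-order Runge-Kutta (RK4) step for
# dx/dt = f(x, t), tested on exponential decay with known exact solution.
import math

def rk4_step(f, x, t, dt):
    """Advance the state x at time t by one step of size dt."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(x + dt * k3, t + dt)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Example: dx/dt = -x, whose exact solution is x(t) = exp(-t).
f = lambda x, t: -x
x, t, dt = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    x = rk4_step(f, x, t, dt)
    t += dt
print(f"RK4: {x:.6f}   exact: {math.exp(-1.0):.6f}")
```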

Stochastic and Deterministic Simulations

Deterministic simulations produce outputs that are entirely predictable and fixed given the same initial conditions and inputs, as they rely on solving mathematical equations without any incorporation of random variables. For instance, in physics, deterministic simulations are commonly used to model planetary motion by numerically integrating Newton's laws of motion, yielding precise trajectories for celestial bodies under gravitational forces. Similarly, in electronics, deterministic simulations of circuits involve solving systems of differential equations to predict voltage and current behaviors in response to deterministic inputs, enabling reliable verification of hardware performance without variability.

In contrast, stochastic simulations explicitly incorporate randomness to model uncertainty and variability inherent in real-world systems, often through probabilistic models that generate a distribution of possible outcomes rather than a single fixed result. A prominent example is the Monte Carlo method, which uses repeated random sampling to approximate solutions to complex problems, such as estimating integrals or probabilities in high-dimensional spaces. Originally developed for neutron diffusion calculations, this approach relies on generating sequences of random numbers to simulate random processes, providing statistical estimates of expected values. In finance, stochastic simulations via Monte Carlo methods are widely applied for risk analysis, such as valuing options or forecasting portfolio losses by modeling uncertain market variables like stock price fluctuations.

Key techniques in stochastic simulations include the use of pseudo-random number generators (PRNGs) to produce sequences that mimic true randomness for efficient computation, as true random sources are impractical for large-scale runs. Seminal work on PRNGs emphasizes algorithms like linear congruential generators, which ensure statistical properties suitable for simulations while remaining deterministic for reproducibility. To improve accuracy and reduce computational cost, methods such as importance sampling are employed; this technique shifts the sampling distribution toward regions of higher importance for the integrand, minimizing the variance of the estimator compared to crude sampling.

The core of the Monte Carlo method for numerical integration can be expressed as follows: for an integral $I = \int_a^b f(x) \, dx$, where $f(x)$ is integrable over $[a, b]$, the estimate is

$$\hat{I}_N = (b - a) \cdot \frac{1}{N} \sum_{i=1}^N f(X_i),$$

with $X_i$ drawn independently from a uniform distribution on $[a, b]$; as $N \to \infty$, $\hat{I}_N$ converges to $I$ by the law of large numbers. Due to the stochastic nature, outputs include confidence intervals to quantify uncertainty, typically constructed via the central limit theorem: for large $N$, $\hat{I}_N$ is approximately normally distributed with mean $I$ and standard deviation $\sigma / \sqrt{N}$, where $\sigma$ is the standard deviation of the sampled values $(b - a) f(X_i)$.
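The estimator and its confidence interval can be sketched directly from the formula above. The example below (an illustrative sketch, not from the article; the integrand, seed, and sample size are assumptions) estimates $\int_0^1 x^2\,dx = 1/3$ and reports a 95% confidence half-width based on the sample standard deviation divided by $\sqrt{N}$.

```python
# A minimal Monte Carlo integration sketch with a 95% confidence interval.
import random
import statistics

def mc_integrate(f, a, b, n, seed=0):
    rng = random.Random(seed)
    samples = [(b - a) * f(rng.uniform(a, b)) for _ in range(n)]
    estimate = statistics.fmean(samples)
    stderr = statistics.stdev(samples) / n ** 0.5       # sigma / sqrt(N)
    return estimate, 1.96 * stderr                      # half-width of 95% CI

est, half_width = mc_integrate(lambda x: x * x, 0.0, 1.0, n=100_000)
print(f"estimate = {est:.5f} +/- {half_width:.5f}  (exact = {1/3:.5f})")
```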