3D computer graphics
3D computer graphics, sometimes called CGI, 3D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data (often Cartesian) stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later (possibly as an animation) or displayed in real time.[1]
3D computer graphics, contrary to what the name suggests, are most often displayed on two-dimensional displays. Unlike 3D film and similar techniques, the result is two-dimensional, without visual depth. Increasingly, however, 3D graphics are also displayed on true 3D displays, such as those used in virtual reality systems.
3D graphics stand in contrast to 2D computer graphics which typically use completely different methods and formats for creation and rendering.
3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and similarly, 3D may use some 2D rendering techniques.[1]
The objects in 3D computer graphics are often referred to as 3D models. Unlike the rendered image, a model's data is contained within a graphical data file. A 3D model is a mathematical representation of any three-dimensional object; a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or it can be used in non-graphical computer simulations and calculations. With 3D printing, models are rendered into an actual 3D physical representation of themselves, with some limitations as to how accurately the physical model can match the virtual model.[2]
History
William Fetter was credited with coining the term computer graphics in 1961[3][4] to describe his work at Boeing. An early example of interactive 3-D computer graphics was explored in 1963 by the Sketchpad program at Massachusetts Institute of Technology's Lincoln Laboratory.[5] One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had originally appeared in the 1972 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke.[6]
3-D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3-D computer graphics effects, written by Kazumasa Mitazawa and released in June 1978 for the Apple II.[7][8]
Virtual reality is one application of 3D computer graphics.[9] Although the first headsets appeared as early as the late 1950s, VR's popularity did not take off until the 2000s. The Oculus Rift was introduced in 2012, and the 3D VR headset market has expanded since then.[10]
Overview
3D computer graphics production workflow falls into three basic phases:
- 3D modeling – the process of forming a computer model of an object's shape
- Layout and CGI animation – the placement and movement of objects (models, lights etc.) within a scene
- 3D rendering – the computer calculations that, based on light placement, surface types, and other qualities, generate (rasterize the scene into) an image[11]
Modeling
Modeling describes the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on a computer with a 3D modeling tool, and models scanned into a computer from real-world objects (polygonal modeling, patch modeling and NURBS modeling are some popular techniques used in 3D modeling). Models can also be produced procedurally or via physical simulation.[11]
A 3D model is formed from points called vertices that define its shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A polygon of n points is an n-gon.[12] The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.[11]
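To make the vertex-and-polygon structure concrete, here is a minimal Python sketch (illustrative only, not tied to any particular modeling package) of a triangle mesh stored as a shared vertex list plus faces that index into it, with a per-face normal computed from a cross product:

```python
import numpy as np

# A unit square built from four shared vertices and two triangles (each face
# stores indices into the vertex list, the usual polygon-mesh layout).
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
])
faces = [(0, 1, 2), (0, 2, 3)]

def face_normal(verts, face):
    """Unit normal of a triangular face via the cross product of two edges."""
    a, b, c = (verts[i] for i in face)
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

for f in faces:
    print(f, face_normal(vertices, f))  # both faces point along +z
```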
Layout and animation
Before rendering into an image, objects must be laid out in a 3D scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. Popular methods include keyframing, inverse kinematics, and motion capture; these techniques are often used in combination. As with animation, physical simulation also specifies motion.
Stop motion has multiple sub-categories, such as claymation, cutout, silhouette, Lego, puppet, and pixelation animation.[9]
Claymation is the use of models made of clay for an animation. Some examples are Clay Fighter and Clay Jam.[9]
Lego animation is one of the more common types of stop motion; it uses the Lego figures themselves moving around. Some examples of this are Lego Island and Lego Harry Potter.[9]
Materials and textures
Materials and textures are properties that the render engine uses to render the model. One can give the model materials that tell the render engine how to treat light when it hits the surface. Textures are used to give the material color using a color or albedo map, or to give the surface features using a bump map or normal map. Textures can also be used to deform the model itself using a displacement map.
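As a rough illustration of how a displacement map differs from a bump or normal map (it moves geometry rather than only altering shading), the sketch below offsets each vertex along its normal by a sampled height value; the mesh, normals, and height samples are made-up placeholders:

```python
import numpy as np

def displace(vertices, normals, heights, strength=0.1):
    """Offset each vertex along its normal by the sampled displacement-map value."""
    return vertices + strength * heights[:, None] * normals

# Three vertices of a flat patch, all facing +z, displaced by per-vertex height samples.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
norms = np.tile([0.0, 0.0, 1.0], (3, 1))
height_samples = np.array([0.0, 0.5, 1.0])   # values read from a displacement map
print(displace(verts, norms, height_samples))
```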
Rendering
Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3-D computer graphics software or a 3-D graphics API.
Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3-D modeling and CAD software may perform 3-D rendering as well (e.g., Autodesk 3ds Max or Blender), exclusive 3-D rendering software also exists (e.g., OTOY's Octane Rendering Engine, Maxon's Redshift).
Examples of 3-D rendering:
- A 3-D model of a Dunkerque-class battleship rendered with flat shading
- During the 3-D rendering step, the number of reflections light rays can take, as well as various other attributes, can be tailored to achieve a desired visual effect (rendered with Cobalt)
- A 3-D rendering of a penthouse
Software
3-D computer graphics software produces computer-generated imagery (CGI) through 3D modeling and 3D rendering or produces 3-D models for analytical, scientific and industrial purposes.
File formats
There are many varieties of file formats supporting 3-D graphics, for example Blender (.blend), Wavefront (.obj), Autodesk FBX (.fbx) and DirectX (.x) files. Each file type generally has its own unique data structure.
Each file format can generally be accessed through its respective application, such as a DirectX-based program or the Quake engine. Alternatively, files can be accessed through third-party standalone programs, or via manual decompilation.
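As an illustration of how simple some of these formats are at their core, here is a minimal Python sketch that reads only the vertex and face records of a Wavefront .obj file (real files also carry normals, texture coordinates, and material references, which this ignores; the file path is a placeholder):

```python
def load_obj(path):
    """Minimal Wavefront .obj reader: collects 'v' vertex positions and 'f' face indices.
    Ignores textures, normals, materials, and every other record type."""
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                vertices.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "f":
                # Face entries look like "v", "v/vt", or "v/vt/vn"; indices are 1-based.
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# vertices, faces = load_obj("model.obj")  # the path is a placeholder
```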
Modeling
3-D modeling software is a class of 3-D computer graphics software used to produce 3-D models. Individual programs of this class are called modeling applications or modelers.
3-D modeling builds on three basic display primitives: drawing points, drawing lines, and drawing triangles and other polygonal patches.[13]
3-D modelers allow users to create and alter models via their 3-D mesh. Users can add, subtract, stretch and otherwise change the mesh to their desire. Models can be viewed from a variety of angles, usually simultaneously. Models can be rotated and the view can be zoomed in and out.
3-D modelers can export their models to files, which can then be imported into other applications as long as the metadata are compatible. Many modelers allow importers and exporters to be plugged-in, so they can read and write data in the native formats of other applications.
Most 3-D modelers contain a number of related features, such as ray tracers and other rendering alternatives and texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able to generate full-motion video of a series of rendered scenes (i.e. animation).
Computer-aided design (CAD)
Computer-aided design software may employ the same fundamental 3-D modeling techniques that 3-D modeling software uses, but its goals differ. CAD software is used in computer-aided engineering, computer-aided manufacturing, finite element analysis, product lifecycle management, 3D printing and computer-aided architectural design.
Complementary tools
After producing a video, studios then edit or composite the video using programs such as Adobe Premiere Pro or Final Cut Pro at the mid-level, or Autodesk Combustion, Digital Fusion, or Shake at the high end. Match moving software is commonly used to match live video with computer-generated video, keeping the two in sync as the camera moves.
Use of real-time computer graphics engines to create a cinematic production is called machinima.[14]
Other types of 3D appearance
Photorealistic 2D graphics
Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wire-frame modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photo-realistic effects without the use of filters.[15]
2.5D
Some video games use 2.5D graphics, involving restricted projections of three-dimensional environments, such as isometric graphics or virtual cameras with fixed angles, either as a way to improve performance of the game engine or for stylistic and gameplay concerns. By contrast, games using 3D computer graphics without such restrictions are sometimes said to use true 3D.
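For instance, a fixed-angle 2.5D presentation can be produced by a simple mapping from 3D world coordinates to 2D screen coordinates; the Python sketch below uses a common 2:1 isometric-style projection (the tile dimensions and angles here are stylistic assumptions, not a standard):

```python
def iso_project(x, y, z, tile_w=64, tile_h=32):
    """Map 3D grid coordinates to 2D screen coordinates with a 2:1 isometric-style
    projection, as used by many fixed-camera 2.5D games (constants are arbitrary)."""
    screen_x = (x - y) * (tile_w / 2)
    screen_y = (x + y) * (tile_h / 2) - z * tile_h
    return screen_x, screen_y

print(iso_project(2, 1, 0))  # a ground tile
print(iso_project(2, 1, 1))  # the same tile raised one level
```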
Other forms of animation
Cutout is the use of flat materials such as paper. Everything is cut out of paper, including the environment, characters, and even some props. An example of this is Paper Mario.[9] Silhouette is similar to cutout except that everything is a single solid color, black; Limbo is an example of this.[9] Puppet animation uses dolls and other puppets; an example would be Yoshi's Woolly World.[9] Pixelation is when the entire game appears pixelated, including the characters and the environment around them; one example of this is seen in Shovel Knight.[9]
References
- ^ a b Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (2013). Computer Graphics: Principles and Practice (3rd ed.). Addison-Wesley. ISBN 978-0321399526.
- ^ "3D computer graphics". ScienceDaily. Retrieved 2019-01-19.
- ^ "An Historical Timeline of Computer Graphics and Animation". Archived from the original on 2008-03-10. Retrieved 2009-07-22.
- ^ "Computer Graphics". Learning Computer History. 5 December 2004.
- ^ Ivan Sutherland Sketchpad Demo 1963, 30 May 2012, retrieved 2023-04-25
- ^ "Pixar founder's Utah-made Hand added to National Film Registry". The Salt Lake Tribune. December 28, 2011. Retrieved January 8, 2012.
- ^ "Brutal Deluxe Software". www.brutaldeluxe.fr.
- ^ "Retrieving Japanese Apple II programs". Projects and Articles. neoncluster.com. Archived from the original on 2016-10-05.
- ^ a b c d e f g h Garg, Nitin (2024-11-15). "A Comprehensive Guide on Different Types of 3D Animation". BR Softech. Retrieved 2024-11-21.
- ^ Flynt, Joseph (2019-08-12). "The History of VR: When was it created and who invented it?". 3D Insider. Retrieved 2024-11-21.
- ^ a b c "A Beginner's Guide to the Concept of 3D in Computer Graphics". ThePro3DStudio. Retrieved 25 August 2025.
- ^ Simmons, Bruce. "n-gon". MathWords. Archived from the original on 2018-12-15. Retrieved 2018-11-30.
- ^ Buss, Samuel R. (2003-05-19). 3D Computer Graphics: A Mathematical Introduction with OpenGL. Cambridge University Press. ISBN 978-1-139-44038-7.
- ^ "Machinima". Internet Archive. Retrieved 2020-07-12.
- ^ "3D versus 2D Modeling Methods". Dummies.com. Retrieved 2025-05-08.
External links
- A Critical History of Computer Graphics and Animation (Wayback Machine copy)
- How Stuff Works - 3D Graphics
- History of Computer Graphics series of articles (Wayback Machine copy)
- How 3D Works - Explains 3D modeling for an illuminated manuscript
3D computer graphics
Introduction
Definition and Principles
3D computer graphics refers to the computational representation and manipulation of three-dimensional objects and scenes in a virtual space, enabling the generation of visual representations on two-dimensional displays through algorithms and mathematical models.[8] This process simulates the appearance of real or imaginary 3D environments by processing geometric data to produce images that convey spatial relationships and depth cues.[9] In contrast to 2D graphics, which confine representations to a planar surface defined by x and y coordinates, 3D graphics introduce a z-axis to model depth, allowing for essential effects such as occlusion (where closer objects obscure farther ones), parallax shifts under viewpoint changes, and the depiction of volumetric properties like shadows and intersections.[10] This depth dimension is fundamental to achieving realistic spatial perception, as it enables computations for visibility determination and perspective distortion absent in flat 2D renderings.[11]

Core principles rely on mathematical foundations, including vector algebra for representing positions as 3D vectors and surface normals as unit vectors to describe orientations.[12] Coordinate systems form the basis: Cartesian coordinates provide a straightforward Euclidean framework for object placement, while homogeneous coordinates extend points to four dimensions as $(x, y, z, w)$ (with $w = 1$ for affine points), unifying transformations into matrix multiplications.[13] Transformations such as translation by adding offsets, rotation via angle-axis or quaternion methods, and scaling by factors are efficiently handled using 4×4 matrices in homogeneous space; for example, a translation matrix is

$$T = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

applied as $p' = T p$.[14] Projection techniques map the 3D scene onto a 2D plane. Perspective projection mimics human vision by converging parallel lines, using the equations $x' = d\,x / z$ and $y' = d\,y / z$, where $d$ is the distance from the viewpoint to the projection plane, effectively dividing coordinates by depth to create foreshortening.[15] In contrast, orthographic projection maintains parallel lines without depth division, preserving dimensions for technical illustrations but lacking realism.[11] These principles ensure that 3D graphics can accurately transform and render complex scenes while accounting for viewer position and spatial hierarchy.
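A minimal Python sketch of the homogeneous translation matrix and perspective divide described above (using NumPy; the function names are illustrative, not taken from any particular graphics library):

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def perspective_project(point, d=1.0):
    """Project a 3D point onto the plane z = d (pinhole at the origin)."""
    x, y, z = point
    return np.array([d * x / z, d * y / z])

# Translate a point expressed in homogeneous coordinates, then project it.
p = np.array([1.0, 2.0, 5.0, 1.0])              # (x, y, z, w) with w = 1
p_moved = translation_matrix(0.0, 0.0, 3.0) @ p
print(perspective_project(p_moved[:3], d=1.0))  # foreshortened 2D coordinates
```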
Applications and Impact
3D computer graphics have transformed numerous industries by enabling immersive and realistic visual representations. In video games, real-time rendering technologies power interactive experiences, with engines like Unreal Engine facilitating the creation of high-fidelity 3D environments for titles across platforms.[16][17] In film and visual effects (VFX), computer-generated imagery (CGI) creates entire worlds and characters, as exemplified by the photorealistic alien ecosystems and motion-captured performances in Avatar, which utilized full CGI environments and virtual camera systems to blend live-action with digital elements.[18][19]

Beyond entertainment, 3D graphics support practical applications in design and visualization. In architecture, virtual walkthroughs allow stakeholders to navigate detailed 3D models of buildings before construction, enhancing design review and client engagement through real-time rendering in browsers or dedicated software.[20][21] Medical visualization leverages 3D models derived from CT or MRI scans to reconstruct organs, aiding surgeons in planning procedures and educating patients with anatomically accurate representations.[22][23] In product design, 3D prototyping enables rapid iteration of digital models, reducing time and costs in manufacturing by simulating physical properties and testing ergonomics virtually.[24][25]

The impact of 3D computer graphics extends to economic, cultural, and technological spheres. Economically, the global computer graphics market is projected to reach USD 244.5 billion in 2025, driven by demand in gaming, media, and simulation sectors.[26] Culturally, advancements have fueled concepts like the metaverse, where persistent 3D virtual spaces foster social interactions and heritage preservation, potentially reshaping global connectivity and tourism.[27][28] Technologically, GPU evolution from the 1990s' basic 3D acceleration to the 2020s' ray-tracing hardware, such as NVIDIA's RTX series, has enabled real-time photorealism, democratizing advanced rendering for broader applications.[29][30]

Societally, 3D graphics have enhanced accessibility through consumer hardware like affordable GPUs, allowing individuals to create and interact with complex visuals on personal devices without specialized equipment.[31] However, ethical concerns arise, particularly with 3D deepfakes, where generative AI produces hyper-realistic synthetic media that blurs reality, raising issues of misinformation, privacy invasion, and consent in visual content creation.[32][33]
History
Early Foundations
The foundations of 3D computer graphics emerged in the 1960s through innovative academic research focused on interactive systems and basic geometric representations, primarily at institutions like MIT, where early efforts shifted from 2D sketching to three-dimensional visualization. Ivan Sutherland's Sketchpad, developed in 1963 as part of his PhD thesis at MIT, introduced the first interactive graphical user interface using a light pen on a vector display, enabling users to create and manipulate line drawings with constraints and replication—concepts that directly influenced subsequent 3D modeling techniques.[34] Although primarily 2D, Sketchpad's architecture served as a critical precursor to 3D graphics by demonstrating real-time interaction and hierarchical object manipulation on cathode-ray tube (CRT) displays.[35] Building on this, early wireframe rendering for 3D objects developed at MIT in the mid-1960s, extending Sutherland's ideas to three-dimensional space. For instance, Sketchpad III, implemented in 1963 on the TX-2 computer at MIT's Lincoln Laboratory, allowed users to construct and view wireframe models in multiple projections, including perspective, using a light pen for input and real-time manipulation of 3D polyhedral shapes.[36] These wireframe techniques represented objects as line segments connecting vertices, facilitating the visualization of complex geometries without surface filling, and were displayed on vector-based CRTs that drew lines directly via electron beam deflection.[37] Key milestones in the late 1960s advanced beyond pure wireframes toward polygonal surfaces. In 1969, researchers at General Electric's Computer Equipment Division conducted a study on one of the earliest applications of computer-generated imagery in a visual simulation system for flight training, employing edge-based representations of 3D objects with up to 500 edges per view, which required resolving visibility through priority lists and basic depth ordering.[38] This work highlighted the potential for polygons as building blocks for more realistic scenes, though limited by computational constraints to simple shapes. By 1975, the Utah teapot model, created by Martin Newell during his PhD research at the University of Utah, became a seminal test object for 3D rendering algorithms; Newell hand-digitized the teapot's bicubic patches into a dataset of 2,000 vertices and 1,800 polygons, providing a standardized benchmark for evaluating surface modeling, hidden-surface removal, and shading due to its intricate handle, spout, and lid details.[39] Hardware innovations were essential to these developments, with early vector displays dominating the 1960s for their ability to render precise lines without pixelation. 
Systems like the modified oscilloscope used in Sketchpad and subsequent MIT projects employed analog deflection to trace wireframes at high speeds, supporting interactive rates for simple 3D rotations and views, though they struggled with dense scenes due to flicker from constant refreshing.[40] The transition to raster displays in the 1970s, driven by falling semiconductor memory costs, enabled pixel-based rendering of filled polygons and colors; early examples included framebuffers on minicomputers like the PDP-11, allowing storage of entire images for anti-aliased lines and basic shading, which proved crucial for handling complex 3D scenes without the limitations of vector persistence.[40]

Academic contributions in shading algorithms further refined surface rendering during this period. In 1971, Henri Gouraud introduced an interpolation method for smooth shading of polygonal approximations to curved surfaces, computing intensity at vertices based on surface normals and linearly interpolating across edges and faces to simulate continuity without per-pixel lighting calculations.[41] This Gouraud shading technique significantly improved the visual quality of wireframe-derived models, reducing the faceted appearance common in early polygon renders. Complementing this, Bui Tuong Phong's 1975 work proposed a reflection model incorporating diffuse, specular, and ambient components, along with an interpolation-based shading algorithm that used interpolated normals for more accurate highlight rendering on curved surfaces approximated by polygons.[42] These methods established foundational principles for realistic illumination in 3D graphics, influencing pipeline designs for decades.
Major Advancements
The 1980s saw significant progress in rendering techniques and hardware, bridging academic research to practical applications. In 1980, Turner Whitted introduced ray tracing, a method simulating light paths for realistic reflections, refractions, and shadows, which became essential for offline photorealistic rendering despite high computational cost.[43] Hardware advancements included specialized graphics systems from Evans & Sutherland and Silicon Graphics Incorporated (SGI), enabling real-time 3D visualization in professional workstations used for CAD and early CGI in films like Tron (1982), the first major motion picture to feature extensive computer-generated imagery.[44] The 1990s marked a pivotal era for 3D computer graphics with the advent of consumer-grade hardware acceleration, transforming graphics from niche academic and professional tools into accessible technology for gaming and personal computing. In November 1996, 3dfx Interactive released the Voodoo Graphics chipset, the first widely adopted 3D accelerator card that offloaded rendering tasks from the CPU to dedicated hardware, enabling smoother frame rates and more complex scenes in real-time applications like Quake.[45] This innovation spurred the development of the first consumer GPUs, such as subsequent iterations from 3dfx and competitors like NVIDIA's Riva series, which integrated 2D and 3D capabilities on a single board and democratized high-fidelity visuals for millions of users.[46] A landmark milestone came in 1995 with Pixar's Toy Story, the first full-length feature film produced entirely using computer-generated imagery (CGI), rendered via Pixar's proprietary RenderMan software, which implemented advanced ray tracing and shading techniques to achieve photorealistic animation.[47] Entering the 2000s, the field advanced toward greater flexibility and realism through programmable graphics pipelines. Microsoft's DirectX 8, released in November 2000, introduced vertex and pixel shaders, allowing developers to write custom code for transforming vertices and coloring pixels, moving beyond fixed-function hardware to enable effects like dynamic lighting and procedural textures in real time.[48] This programmability, supported by GPUs like NVIDIA's GeForce 3, revolutionized game development and visual effects, facilitating more artist-driven control over rendering outcomes. The 2010s and 2020s witnessed integration with emerging technologies and computational breakthroughs, particularly in real-time global illumination and AI-enhanced workflows. 
In March 2018, NVIDIA announced RTX technology with the Turing architecture, enabling hardware-accelerated real-time ray tracing on consumer GPUs, which simulates light paths for accurate reflections, refractions, and shadows at interactive speeds, fundamentally elevating graphical fidelity in games and simulations.[49] Complementing this, NVIDIA's OptiX ray tracing engine incorporated AI-accelerated denoising in the early 2020s, using deep learning to remove noise from incomplete ray-traced renders, drastically reducing computation time while preserving detail, often achieving visually clean images in seconds on RTX hardware.[50] Open-source efforts also flourished, exemplified by Blender's Cycles render engine, introduced in 2011 and continually refined through community contributions, which supports unbiased path tracing on CPUs and GPUs, making production-quality rendering freely available and fostering innovations in film, architecture, and scientific visualization.[51]

Key milestones included the 2012 Kickstarter launch of the Oculus Rift, which revitalized virtual reality by leveraging stereoscopic 3D graphics and head-tracking for immersive environments, influencing graphics hardware optimizations for low-latency rendering.[52] By 2025, these advancements extended to scientific applications, with AI-accelerated simulations and high-resolution 3D visualizations enhancing climate modeling in platforms like NVIDIA's Earth-2, built on Omniverse, enabling researchers to analyze complex atmospheric interactions with unprecedented accuracy.[53]
Core Techniques
3D Modeling
3D modeling involves the creation of digital representations of three-dimensional objects through geometric and topological structures, serving as the foundational step for subsequent processes like animation and rendering. These models define the shape, position, and connectivity of objects in a virtual space using mathematical descriptions that approximate real-world geometry. Common approaches emphasize efficiency in storage, manipulation, and computation, often balancing detail with performance in applications such as computer-aided design and visual effects.

Fundamental building blocks in 3D modeling are geometric primitives, which include points (zero-dimensional locations defined by coordinates), lines (one-dimensional connections between points), polygons (two-dimensional faces typically triangular or quadrilateral), and voxels (three-dimensional volumetric elements analogous to pixels in 3D space).[54] These primitives enable the construction of complex shapes; for instance, polygons form the basis of surface models, while voxels support volumetric representations suitable for simulations like medical imaging.[55]

One prevalent technique is polygonal modeling, where objects are represented as meshes composed of vertices (position points), edges (connections between vertices), and faces (bounded polygonal regions). This mesh structure allows for flexible topology and is widely used due to its compatibility with hardware-accelerated rendering pipelines. A survey on polygonal meshes highlights their role in approximating smooth surfaces through triangulation or quadrangulation, with applications in geometry processing tasks like simplification and remeshing.[56] For smoother representations, subdivision surfaces refine coarse polygonal meshes iteratively; the Catmull-Clark algorithm, for example, generates limit surfaces that approximate bicubic B-splines on arbitrary topologies by averaging vertex positions across refinement levels.[57] Another important method is digital sculpting, which simulates traditional clay sculpting in a digital environment using brush tools to push, pull, and deform high-resolution meshes. This technique excels at creating intricate organic forms like characters and creatures, often starting from a base mesh and adding detail through dynamic topology adjustments.[58]

Curve- and surface-based methods, such as non-uniform rational B-splines (NURBS), provide precise control for freeform shapes. NURBS extend B-splines by incorporating rational weights, enabling exact representations of conic sections and complex geometries like car bodies in CAD systems. Introduced in Versprille's dissertation, NURBS curves are defined parametrically, with the surface form generalizing tensor-product constructions.[59] Spline interpolation underpins these, as seen in Bézier curves, where a curve of degree $n$ is given by $\mathbf{B}(t) = \sum_{i=0}^{n} b_{i,n}(t)\,\mathbf{P}_i$ for $t \in [0, 1]$, with $\mathbf{P}_i$ as control points and $b_{i,n}(t) = \binom{n}{i} t^i (1-t)^{n-i}$ as Bernstein polynomials. This formulation ensures convexity and smooth interpolation between points.[60]

To compose complex models from simpler ones, constructive solid geometry (CSG) employs Boolean operations on primitives: union combines volumes, intersection retains overlapping regions, and difference subtracts one from another. Originating in Requicha's foundational work on solid representations, CSG ensures watertight models by operating on closed sets, though it requires efficient intersection computations for practical use.
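One lightweight way to illustrate these Boolean operations is with signed distance functions, where union, intersection, and difference reduce to min/max combinations of per-primitive distances; this Python sketch shows the idea, not the boundary-representation machinery most CAD kernels actually use:

```python
import numpy as np

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center) - radius

# CSG Boolean operations expressed on signed distances (a common implicit-surface trick).
def union(d1, d2):        return min(d1, d2)
def intersection(d1, d2): return max(d1, d2)
def difference(d1, d2):   return max(d1, -d2)   # d1 minus d2

p = np.array([0.5, 0.0, 0.0])
a = sphere_sdf(p, np.array([0.0, 0.0, 0.0]), 1.0)   # p lies inside sphere A
b = sphere_sdf(p, np.array([1.0, 0.0, 0.0]), 1.0)   # p also lies inside sphere B
print(union(a, b) < 0, intersection(a, b) < 0, difference(a, b) < 0)  # True True False
```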
Supporting these techniques are data structures like scene graphs, which organize models hierarchically as directed acyclic graphs with nodes representing objects, transformations, and groups. This allows efficient traversal for rendering and simulation by propagating changes through parent-child relationships.[61] For optimization, bounding volumes enclose models to accelerate queries; axis-aligned bounding boxes (AABBs), defined by min-max coordinates along axes, provide fast intersection tests in collision detection and ray tracing.[62]
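A minimal Python sketch of the AABB overlap test mentioned above (the class and function names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box stored as per-axis minimum and maximum coordinates."""
    min_pt: tuple  # (x_min, y_min, z_min)
    max_pt: tuple  # (x_max, y_max, z_max)

def aabb_overlap(a: AABB, b: AABB) -> bool:
    """Two AABBs overlap only if their intervals overlap on every axis."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i] for i in range(3))

box_a = AABB((0, 0, 0), (1, 1, 1))
box_b = AABB((0.5, 0.5, 0.5), (2, 2, 2))
print(aabb_overlap(box_a, box_b))  # True: the boxes share a corner region
```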
Animation and Scene Layout
Scene layout in 3D computer graphics involves the strategic arrangement of modeled objects, cameras, lights, and props to construct a coherent virtual environment that supports narrative or functional goals. Once 3D models are created, they are imported and positioned relative to one another, often using object hierarchies to manage complexity; these hierarchies establish parent-child relationships that propagate transformations such as translations and rotations efficiently across assemblies like characters or vehicles.[63] Camera placement is a critical aspect, defining the viewer's perspective and framing, with techniques ranging from manual adjustments to automated methods that optimize viewpoints for hierarchical storytelling or scene comprehension.[64] Environmental setup completes the layout by integrating lights to establish mood and directionality, alongside props that fill space and interact with primary elements, ensuring spatial relationships align with intended dynamics.[65]

Animation techniques enable the temporal evolution of these laid-out scenes, transforming static compositions into dynamic sequences. Keyframing remains a foundational method, where animators define discrete poses at specific timestamps, and intermediate positions are generated through interpolation to create fluid motion; this approach draws from traditional animation principles adapted to 3D, emphasizing timing and easing for realism.[66] Linear interpolation provides the basic mechanism for blending between two keyframes $p_0$ and $p_1$ at parameter $t \in [0, 1]$: $p(t) = (1 - t)\,p_0 + t\,p_1$. This is extended to cubic splines, which ensure continuity for smoother trajectories by fitting piecewise polynomials constrained at endpoints and tangents.[67] For character rigging, inverse kinematics (IK) solves the inverse problem of positioning end effectors, like hands or feet, while computing joint angles to achieve natural poses, contrasting forward kinematics by prioritizing goal-directed control over sequential joint specification. Motion capture (mocap) is another essential technique, involving the recording of real-world movements using sensors or cameras to capture data from actors or objects, which is then applied to digital models for highly realistic animations. This method reduces manual effort and captures nuanced performances, commonly used in film and video games.[68] Procedural animation complements these by algorithmically generating motion without manual keyframing, as in particle systems that simulate dynamic, fuzzy phenomena such as fire or smoke through clouds of independent particles governed by stochastic rules for birth, life, and death.[69]

Physics-based simulations integrate realistic motion into animated scenes by modeling interactions under physical laws. Rigid body dynamics applies Newton's second law ($F = ma$) to compute accelerations from forces and torques on undeformable objects, enabling collisions and constraints that propagate through hierarchies for believable responses like falling or tumbling. For deformable elements, cloth and soft body simulations employ mass-spring models, discretizing surfaces into point masses connected by springs that resist stretching, shearing, and bending; internal pressures or damping stabilize the system, allowing emergent behaviors like folding or fluttering.[70]
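Returning to the keyframe interpolation above, here is a minimal Python sketch of piecewise-linear sampling between keyframes (the function names and example keys are illustrative, not from any animation package):

```python
import numpy as np

def lerp(p0, p1, t):
    """Linear interpolation between two keyframe values at parameter t in [0, 1]."""
    return (1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

def sample_animation(keyframes, time):
    """Piecewise-linear sampling of (time, value) keyframes at an arbitrary time."""
    keyframes = sorted(keyframes)
    if time <= keyframes[0][0]:
        return np.asarray(keyframes[0][1], float)
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if time <= t1:
            return lerp(v0, v1, (time - t0) / (t1 - t0))
    return np.asarray(keyframes[-1][1], float)  # clamp past the last keyframe

# An object translated from the origin to (4, 0, 2) over one second, sampled at t = 0.25 s.
keys = [(0.0, (0, 0, 0)), (1.0, (4, 0, 2))]
print(sample_animation(keys, 0.25))
```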
Visual Representation
Materials and Texturing
In 3D computer graphics, materials define the intrinsic properties of surfaces to enable realistic or stylized rendering, independent of lighting conditions. Physically-based rendering (PBR) models form the foundation of modern material systems, approximating real-world optical behavior through key parameters such as albedo, roughness, and metallic. Albedo, often represented as a base color texture or factor, specifies the proportion of light reflected diffusely by the surface, excluding specular contributions. Roughness quantifies the irregularity of microscopic surface facets, with values ranging from 0 (perfectly smooth, mirror-like) to 1 (highly diffuse, matte), influencing the spread of specular highlights. The metallic parameter distinguishes between dielectric (non-metallic) and conductor (metallic) materials; for metals, it sets albedo to the material's reflectivity while disabling diffuse reflection, ensuring energy conservation in the model. These parameters adhere to microfacet theory, where surface appearance emerges from billions of tiny facets oriented randomly.

Texturing enhances materials by mapping 2D images onto 3D geometry using UV coordinates, which parametrize the surface as a flattened 2D domain typically in the [0,1] range for both U and V axes. UV mapping projects textures onto models by unfolding the 3D surface into this 2D space, allowing precise control over how image details align with geometry features like seams or contours. To handle varying screen distances and reduce aliasing artifacts, mipmapping precomputes a pyramid of progressively lower-resolution texture versions, selecting the appropriate level of detail (LOD) based on the texel's projected size; this minimizes moiré patterns and improves rendering efficiency by sampling fewer texels for distant surfaces.[71]

For adding fine geometric detail without increasing polygon count, normal mapping and bump mapping perturb surface normals during shading. Normal maps encode tangent-space normal vectors in RGB channels (with blue typically dominant for forward-facing perturbations), enabling detailed lighting responses like shadows and highlights on flat geometry. Bump mapping, an earlier precursor, uses grayscale height maps to compute approximate normals via finite differences, simulating elevation variations such as wrinkles or grains. Both techniques integrate seamlessly with PBR materials, applying perturbations to the base normal before lighting computations.

Procedural textures generate patterns algorithmically at runtime, avoiding the need for stored images and allowing infinite variation. A prominent example is Perlin noise, which creates coherent, organic randomness through gradient interpolation across a grid, ideal for simulating natural phenomena like marble veins or wood grain; higher octaves of noise can be layered for fractal-like complexity. Multilayer texturing extends this by assigning separate maps to PBR channels: diffuse (albedo) for color, specular for reflection intensity and color (in specular/glossiness workflows), and emission for self-glow. These maps are often packed into single textures for efficiency, such as combining metallic and roughness into RG channels.[72][73]

Texture sampling retrieves color values from these maps using UV coordinates, typically via the GLSL function texture2D(tex, uv), which applies bilinear filtering by linearly interpolating between the four nearest texels to produce smooth, anti-aliased results for non-integer coordinates.
This process forms a core step in fragment shaders, where sampled values populate material parameters before final color computation. Materials and texturing thus prepare surfaces for integration into rendering pipelines, such as rasterization, where they inform per-pixel evaluations.
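A minimal Python sketch of the bilinear filtering step described above, written against a NumPy array standing in for a texture (the function name and the tiny 2×2 texture are illustrative):

```python
import numpy as np

def sample_bilinear(texture, u, v):
    """Sample an H x W x C texture array at normalized UV coordinates with bilinear filtering."""
    h, w = texture.shape[:2]
    # Map [0, 1] UVs to continuous texel space, then find the four surrounding texels.
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Interpolate horizontally on the two rows, then vertically between the results.
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bottom

# A tiny 2x2 grayscale "texture": sampling at the center blends all four texels equally.
tex = np.array([[[0.0], [1.0]],
                [[1.0], [0.0]]])
print(sample_bilinear(tex, 0.5, 0.5))  # [0.5]
```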
Lighting and Shading
Lighting and shading in 3D computer graphics simulate the interaction between light sources and material surfaces to determine pixel colors, providing visual depth and realism without global light transport computations. Light sources are defined by their position, direction, intensity, and color, while shading models calculate local illumination contributions at surface points based on surface normals and material properties. These techniques form the foundation for approximating realistic appearances in real-time and offline rendering.[42]

Common types of light sources include point lights, which emit illumination uniformly in all directions from a fixed position, mimicking small bulbs or candles; directional lights, which send parallel rays from an infinite distance to model sources like sunlight; and area lights, which are extended geometric shapes such as rectangles or spheres that produce softer, more realistic shadows due to their finite size.[74] Point and area lights incorporate distance-based attenuation, following the inverse square law from physics, where light intensity falls off as $1/d^2$, with $d$ as the distance from the source, to prevent unrealistically bright distant illumination.[75] This attenuation is often implemented as a factor in the shading equation, such as $1/(k_c + k_l d + k_q d^2)$, where constants $k_c$, $k_l$, and $k_q$ adjust the falloff curve for artistic control.[75]

Shading models break down surface response into components like ambient, diffuse, and specular reflection. The Lambertian diffuse model, ideal for matte surfaces, computes the diffuse intensity as $I_d = k_d\, I_L\, (\mathbf{N} \cdot \mathbf{L})$, where $k_d$ is the diffuse reflectivity, $I_L$ the light's intensity, $\mathbf{N}$ the normalized surface normal, and $\mathbf{L}$ the normalized vector from the surface to the light; the cosine term accounts for reduced illumination at grazing angles.[42] This model assumes perfectly diffuse reflection, where light scatters equally in all directions, independent of viewer position.[42]

Specular reflection adds shiny highlights to simulate glossy materials. The original Phong model calculates specular intensity as $I_s = k_s\, I_L\, (\mathbf{R} \cdot \mathbf{V})^n$, where $k_s$ is the specular reflectivity, $\mathbf{R}$ the perfect reflection vector $2(\mathbf{N} \cdot \mathbf{L})\mathbf{N} - \mathbf{L}$, $\mathbf{V}$ the normalized view direction, and $n$ the shininess exponent controlling highlight sharpness; higher $n$ produces tighter, more mirror-like reflections.[42] The full Phong shading combines the ambient term $k_a I_a$ with the diffuse and specular terms, often clamped to [0,1] and multiplied by material color.[42] The Blinn-Phong model refines specular computation for efficiency by using a half-vector approximation: compute $\mathbf{H} = (\mathbf{L} + \mathbf{V}) / \lVert \mathbf{L} + \mathbf{V} \rVert$, then $I_s = k_s\, I_L\, (\mathbf{N} \cdot \mathbf{H})^n$. This avoids explicit reflection vector calculation, reducing operations per vertex or fragment while closely approximating Phong highlights, especially for low $n$.[76] Blinn-Phong remains widely used in real-time graphics due to its balance of quality and performance.[76]

For materials beyond opaque surfaces, subsurface scattering models light penetration and re-emission in translucent substances like wax or human skin. The dipole model approximates this by placing virtual point sources inside the material to solve diffusion equations, enabling efficient computation of blurred, soft appearances from internal scattering.[77] This approach captures effects like color bleeding and forward scattering, validated against measured data for materials such as marble and milk.[77] Volumetric lighting extends shading to media like atmosphere or fog, where light scatters within volumes to form visible beams known as god rays or crepuscular rays.
These are simulated by sampling light density along rays from the camera through occluders, using techniques like ray marching to integrate scattering contributions and accumulate transmittance for realistic atmospheric effects.[78] In practice, post-processing methods project light sources onto a buffer, apply radial blurring, and composite with depth to achieve real-time performance.[78]
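A minimal Python sketch of the Lambertian diffuse and Blinn-Phong specular terms described earlier in this section, for a single point light with inverse-square attenuation (the function name and material constants are illustrative defaults, not standard values):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(point, normal, eye, light_pos, light_intensity=1.0,
                k_d=0.8, k_s=0.5, shininess=32.0):
    """Local illumination at a surface point: Lambertian diffuse + Blinn-Phong specular,
    attenuated by the inverse square of the distance to a point light."""
    n = normalize(normal)
    l_vec = light_pos - point
    d = np.linalg.norm(l_vec)
    l = l_vec / d
    v = normalize(eye - point)
    h = normalize(l + v)                           # half-vector between light and view
    attenuation = 1.0 / (d * d)                    # inverse square law falloff
    diffuse = k_d * max(np.dot(n, l), 0.0)         # Lambertian cosine term
    specular = k_s * max(np.dot(n, h), 0.0) ** shininess
    return light_intensity * attenuation * (diffuse + specular)

# A point on the ground plane, lit from above and slightly to the side.
print(blinn_phong(point=np.array([0.0, 0.0, 0.0]),
                  normal=np.array([0.0, 1.0, 0.0]),
                  eye=np.array([0.0, 1.0, 2.0]),
                  light_pos=np.array([1.0, 2.0, 1.0])))
```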