Virtual actor
from Wikipedia

A virtual actor (also known as a virtual human, virtual persona, or digital clone) is the creation or re-creation of a human being in image and voice using computer-generated imagery and sound, often indistinguishable from the real actor.

The idea of a virtual actor was first portrayed in the 1981 film Looker, in which models had their bodies digitally scanned to create 3D computer-generated images, which were then animated for use in TV commercials. Two 1992 books used this concept: Fools by Pat Cadigan and Et Tu, Babe by Mark Leyner.

In general, virtual humans employed in movies are known as synthespians, virtual actors, cyberstars, or "silicentric" actors. There are several legal ramifications for the digital cloning of human actors, relating to copyright and personality rights. People who have already been digitally cloned as simulations include Bill Clinton, Marilyn Monroe, Fred Astaire, Ed Sullivan, Elvis Presley, Bruce Lee, Audrey Hepburn, Anna Marie Goddard, and George Burns.[1][2]

By 2002, Arnold Schwarzenegger, Jim Carrey, Kate Mulgrew, Michelle Pfeiffer, Denzel Washington, Gillian Anderson, and David Duchovny had all had their heads laser scanned to create digital computer models thereof.[1]

Early history


Early computer-generated animated faces include the 1985 film Tony de Peltrie and the music video for Mick Jagger's song "Hard Woman" (from She's the Boss). The first actual human beings to be digitally duplicated were Marilyn Monroe and Humphrey Bogart in a March 1987 film "Rendez-vous in Montreal" created by Nadia Magnenat Thalmann and Daniel Thalmann for the 100th anniversary of the Engineering Institute of Canada. The film was created by six people over a year, and had Monroe and Bogart meeting in a café in Montreal, Quebec, Canada. The characters were rendered in three dimensions, and were capable of speaking, showing emotion, and shaking hands.[3]

In 1987, the Kleiser-Walczak Construction Company (now Synthespian Studios), founded by Jeff Kleiser and Diana Walczak coined the term "synthespian" and began its Synthespian ("synthetic thespian") Project, with the aim of creating "life-like figures based on the digital animation of clay models".[2][4]

In 1988, Tin Toy was the first entirely computer-generated movie to win an Academy Award (Best Animated Short Film). In the same year, Mike the Talking Head, an animated head whose facial expression and head posture were controlled in real time by a puppeteer using a custom-built controller, was developed by Silicon Graphics, and performed live at SIGGRAPH. In 1989, The Abyss, directed by James Cameron included a computer-generated face placed onto a watery pseudopod.[3][5]

In 1991, Terminator 2: Judgment Day, also directed by Cameron, who was confident in the abilities of computer-generated effects after his experience with The Abyss, included a mixture of synthetic actors with live action, including computer models of Robert Patrick's face. The Abyss contained just one scene with photo-realistic computer graphics; Terminator 2: Judgment Day contained over forty such shots throughout the film.[3][5][6]

In 1997, Industrial Light & Magic worked on creating a virtual actor that was a composite of the bodily parts of several real actors.[2]

In 2000, Microsoft Research published an article by Gordon Bell and Jim Gray, titled "Digital immortality."[7][8] The authors worked on the system called MyLifeBits to create a "digital clone" of a person from digital data.[9]

21st century


By the 21st century, virtual actors had become a reality. The face of Brandon Lee, who had died partway through the shooting of the 1994 film The Crow, had been digitally superimposed onto a body double in order to complete those parts of the movie that had yet to be filmed. By 2001, three-dimensional computer-generated realistic humans had been used in Final Fantasy: The Spirits Within, and by 2004, a synthetic Laurence Olivier co-starred in Sky Captain and the World of Tomorrow.[10][11]

Star Wars


Since the mid-2010s, the Star Wars franchise has become particularly notable for its prominent usage of virtual actors, driven by a desire in recent entries to reuse characters that first appeared in the original trilogy during the late 1970s and early 1980s.

The 2016 Star Wars Anthology film Rogue One: A Star Wars Story is a direct prequel to the 1977 film Star Wars: A New Hope, with the ending scene of Rogue One leading almost immediately into the opening scene of A New Hope. As such, Rogue One called for Industrial Light & Magic to make digital recreations of certain characters so they would look the same as they did in A New Hope, specifically the roles of Peter Cushing as Grand Moff Tarkin (played and voiced by Guy Henry) and Carrie Fisher as Princess Leia (played by Ingvild Deila and voiced by an archive recording of Fisher). Cushing had died in 1994, while Fisher was not available to play Leia during production and died a few days after the film's release.[12]

Similarly, the 2020 second season of The Mandalorian briefly featured a digital recreation of Mark Hamill's character Luke Skywalker (played by an uncredited body double and voiced by an audio deepfake recreation of Hamill's voice[citation needed]) as portrayed in the 1983 film Return of the Jedi. Canonically, The Mandalorian's storyline takes place roughly five years after the events of Return of the Jedi.

Legal issues

Critics such as Stuart Klawans in the New York Times expressed worry about the loss of "the very thing that art was supposedly preserving: our point of contact with the irreplaceable, finite person". Even more problematic are the issues of copyright and personality rights. Actors have little legal control over a digital clone of themselves. In the United States, for instance, they must resort to database protection laws in order to exercise what control they have (The proposed Database and Collections of Information Misappropriation Act would strengthen such laws). An actor does not own the copyright on their digital clones, unless the clones were created by them. Robert Patrick, for example, would not have any legal control over the liquid metal digital clone of himself that was created for Terminator 2: Judgment Day.[10][13]

The use of digital clones in the movie industry to replicate the acting performances of a cloned person represents a controversial aspect of these implications, as it may cause real actors to land fewer roles and put them at a disadvantage in contract negotiations, since a clone could always be used by producers at potentially lower cost. It also creates career difficulties, since a clone could be used in roles that a real actor would not accept for various reasons. Both Tom Waits and Bette Midler have won actions for damages against people who employed their images in advertisements that they had refused to take part in themselves.[14]

In the USA, the use of a digital clone in advertisements is required to be accurate and truthful under section 43(a) of the Lanham Act, which makes deliberate confusion unlawful. The use of a celebrity's image would be an implied endorsement. The United States District Court for the Southern District of New York held that an advertisement employing a Woody Allen impersonator would violate the Act unless it contained a disclaimer stating that Allen did not endorse the product.[14]

Other concerns include posthumous use of digital clones. Even before Brandon Lee was digitally reanimated, the California Senate drew up the Astaire Bill, in response to lobbying from Fred Astaire's widow and the Screen Actors Guild, who were seeking to restrict the use of digital clones of Astaire. Movie studios opposed the legislation, and as of 2002 it had yet to be finalized and enacted. Several companies, including Virtual Celebrity Productions, have purchased the rights to create and use digital clones of various dead celebrities, such as Marlene Dietrich[15] and Vincent Price.[2]

from Grokipedia
A virtual actor is a computer-generated entity designed to simulate performance in films, television, advertisements, video games, and other media, employing techniques such as CGI, motion capture, and machine-learning algorithms to replicate facial expressions, body movements, and vocal delivery without relying on a physical performer. Virtual actors emerged prominently in the 1990s with early CGI applications in blockbusters like Terminator 2: Judgment Day (1991), where liquid metal morphing effects foreshadowed fully autonomous digital performers, but advanced rapidly post-2010 due to improvements in deep learning and real-time rendering. Key examples include posthumous recreations such as Carrie Fisher's portrayal of Princess Leia in Rogue One (2016) using archival footage and facial mapping, Paul Walker's completion of Furious 7 (2015) via his brothers' stand-in performances and CGI overlay, and de-aged depictions like Samuel L. Jackson in Captain Marvel (2019). These applications have enabled narrative continuity for franchises and reduced production costs by obviating scheduling conflicts or aging limitations inherent to actors, though they have sparked controversies over intellectual property rights, consent for likeness usage, and potential displacement of living performers. The 2023 SAG-AFTRA strike highlighted these tensions, with actors demanding contractual safeguards against unauthorized AI replicas that could undermine residuals and employment, reflecting broader causal dynamics where technological efficiency collides with labor economics in creative industries. Despite ethical debates, virtual actors continue to proliferate, as seen in fully AI-generated characters in advertisements and virtual influencers like Lil Miquela, which have amassed millions of followers through scripted interactions indistinguishable from human-created content.

History

Early developments

The first documented use of computer-generated imagery (CGI) in a feature film appeared in Westworld (1973), directed by Michael Crichton, where digital image processing produced pixelated facial distortions on robotic characters to simulate overheating malfunctions. This 2D technique, inspired by NASA imagery processing for Mars flybys, represented rudimentary digital manipulation integrated with live-action footage, though it lacked three-dimensional modeling or animation. Advancements accelerated in the 1980s with Looker (1981), another Crichton film that depicted the concept of virtual actors through the first 3D-shaded CGI human model, "Cindy," generated by manually measuring and digitizing an actress's body proportions for use in holographic advertisements. This wireframe figure, rendered on early computers, foreshadowed synthetic performers replacing models, emphasizing efficiency in commercial production over realism. By 1985, Young Sherlock Holmes featured the earliest fully CGI character—a photorealistic stained-glass knight—interacting seamlessly with live actors in a ten-second sequence, animated using custom software for shattering effects and motion. These innovations built on abstract experiments from the 1940s by pioneers like John Whitney but shifted toward practical cinematic applications, laying groundwork for digital entities despite hardware limitations like low resolution and high computational costs. Early efforts prioritized proof-of-concept over lifelike performance, with CGI actors appearing stiff and non-expressive compared to practical effects.

21st-century advancements

In the early 2000s, advancements in computer-generated imagery (CGI) enabled the creation of fully digital human characters, marking a shift toward photorealistic virtual actors. The 2001 film Final Fantasy: The Spirits Within featured an entirely CGI cast, including the protagonist Aki Ross, intended as the first photorealistic computer-animated actress, with motion capture from live actors refined through software like Maya for blocking and tweaks. Despite criticisms of the uncanny valley effect, the production demonstrated feasible full-length rendering of synthetic performers using specialized skin rendering and detailed facial rigging. Concurrently, hybrid techniques emerged, as in Gladiator (2000), where Oliver Reed's unfinished scenes were completed by superimposing a digital face onto a body double, with animated mouth movements synced to dialogue from outtakes.

Motion capture technology advanced significantly by mid-decade, allowing virtual actors to derive lifelike performances from human input. In The Lord of the Rings: The Two Towers (2002), Andy Serkis's portrayal of Gollum utilized pioneering performance capture, recording scenes three times—once with Serkis alone, once with actors reacting to him, and once with Serkis in a marker-based motion-capture suit—then mapping the data to a CGI model via Weta Digital's proprietary software for 73 minutes of effects across 799 shots. This approach integrated facial expressions, body language, and voice to create a character blending human nuance with digital exaggeration, influencing subsequent non-human virtual roles like the Na'vi in Avatar (2009), though these focused on enhanced realism over full human recreation.

The 2010s saw refined digital resurrection and de-aging, leveraging archival scans, motion capture dots, and facial superimposition for deceased or altered actors. TRON: Legacy (2010) de-aged Jeff Bridges by combining 1980s film scans with 134-point motion capture on a stunt double to generate a younger Clu. Similar methods completed Paul Walker's role in Furious 7 (2015) using his brothers as body doubles with digitized faces for dialogue scenes. By 2016, Rogue One: A Star Wars Story recreated Peter Cushing's Grand Moff Tarkin (Cushing died in 1994) by having actor Guy Henry perform the role, then replacing his face with CGI modeled from original footage and life-casts, addressing challenges like lower-body data scarcity through integrated animation. These techniques, reliant on pre-deep-learning CGI pipelines, expanded virtual actors' utility in mainstream cinema while raising early debates on consent and likeness rights.

AI-driven evolution since 2020

Since 2020, artificial intelligence has advanced virtual actor capabilities through enhanced generative models for voice cloning, facial animation, and behavioral simulation, enabling synthetic performers that approximate human nuance with greater fidelity. Companies like Respeecher deployed AI-driven speech-to-speech synthesis to recreate the voice of a young Luke Skywalker in Disney's The Mandalorian season 2 finale, aired in 2020, by training neural networks on archival audio from actor Mark Hamill's early performances. This marked an early high-profile application of AI for resurrecting deceased or aged characters without live recording, reducing production costs and logistical constraints while raising questions about consent for likeness replication.

Parallel progress in avatar technology has produced customizable virtual actors for scripted content, with Synthesia introducing its platform in summer 2020 and iteratively refining AI avatars via diffusion-based models for lip-sync and gesture generation. By September 2025, Synthesia's Express-2 engine incorporated advanced voice and full-body expressiveness, allowing avatars to mimic natural mannerisms from input text or scripts, expanding use in corporate videos and training simulations. These tools leverage large datasets of human performance to train models that generate coherent performances, outperforming pre-2020 rule-based systems in emotional range and adaptability.

Industry adoption accelerated amid the 2023 SAG-AFTRA strike, where performers negotiated protections against unauthorized digital replicas, culminating in an agreement with AI usage guidelines including consent requirements and compensation for synthetic likenesses. By August 2025, platforms like Iconic Media launched fully AI-generated actors for modeling and commercial content, supporting diverse content types via replicated human appearances and voices derived from generative adversarial networks. Such developments, grounded in scalable compute for training on vast media corpora, have shifted virtual actors from supplementary CGI to viable standalone entities, though empirical validation of long-term realism remains tied to ongoing benchmarks in perceptual quality.

Technical Foundations

Creation and animation techniques

Virtual actors are created through a pipeline beginning with high-fidelity 3D scanning of real human subjects to capture geometric details, such as facial topology and body proportions, often using structured light or photogrammetry systems for photorealistic meshes. This process generates dense models, typically exceeding 100,000 vertices for faces, which are then refined via retopology to optimize for animation while preserving anatomical accuracy. Rigging follows, involving the construction of an internal skeletal hierarchy—comprising bones for limbs, spine, and face—devised to mimic human anatomy; advanced methods include data-driven rigs that adapt to scanned geometry using blend shapes for expressions. Skinning binds the mesh to this rig via techniques like linear blend skinning (LBS), where vertex weights determine deformation influence from multiple bones, though newer approaches incorporate normal-constrained decomposition to minimize artifacts like unnatural folds during facial animation.

Animation techniques predominantly rely on motion capture (mocap), which records performer movements using optical systems with reflective markers tracked by infrared cameras at 120-240 frames per second, translating data into joint rotations for the virtual rig. Inertial measurement units (IMUs) embedded in suits provide markerless alternatives, capturing acceleration and orientation for full-body tracking with sub-millimeter accuracy in controlled environments, reducing setup time compared to optical methods. Facial capture employs specialized headsets with dense marker arrays or photometric stereo to drive muscle-based simulations, enabling micro-expressions; for instance, projects like Digital Ira transfer scanned performances to bone-driven rigs via weight solvers on 4K meshes. Hybrid approaches layer mocap with physics-based simulations for cloth and hair dynamics, ensuring causal interactions like gravity and collisions.

Emerging AI-driven methods augment traditional techniques by generating animations from video inputs, using neural networks to estimate poses without markers—such as pose estimation models trained on large datasets to infer 3D keypoints from 2D footage with 95% accuracy in joint prediction. These systems, like those employing diffusion models or transformers, enable procedural synthesis of behaviors, including walk cycles or emotional gestures, by interpolating from reference mocap libraries, though they require validation against empirical human data to avoid distortions. Video-to-animation pipelines process raw footage in seconds, outputting rigged sequences compatible with engines like Unreal, but outputs often necessitate manual cleanup for production fidelity due to limitations in handling occlusions or complex interactions. Overall, these techniques prioritize empirical fidelity to real human movement, with validation through perceptual studies showing mocap outperforming keyframing in realism scores by up to 30% in viewer assessments.
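
To make the skinning step above concrete, the following is a minimal NumPy sketch of linear blend skinning: every vertex is transformed by each bone's bind-to-posed matrix, and the results are blended with per-vertex weights. The two-bone "arm", the weight values, and the function name are illustrative assumptions, not drawn from any particular production rig.

import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, weights):
    # rest_vertices:   (V, 3) vertex positions in the bind (rest) pose
    # bone_transforms: (B, 4, 4) homogeneous bind-to-posed transforms, one per bone
    # weights:         (V, B) skinning weights; each row sums to 1
    v = rest_vertices.shape[0]
    homogeneous = np.concatenate([rest_vertices, np.ones((v, 1))], axis=1)  # (V, 4)
    # Transform every vertex by every bone, then blend with the weights.
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homogeneous)       # (B, V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)                    # (V, 4)
    return blended[:, :3]

# Toy example: a three-vertex "arm" driven by two bones; the second bone
# rotates 90 degrees about the z-axis around the pivot point (1, 0, 0).
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
upper = np.eye(4)
lower = np.array([[0.0, -1.0, 0.0,  1.0],
                  [1.0,  0.0, 0.0, -1.0],
                  [0.0,  0.0, 1.0,  0.0],
                  [0.0,  0.0, 0.0,  1.0]])
bones = np.stack([upper, lower])
weights = np.array([[1.0, 0.0],   # shoulder vertex follows the upper bone only
                    [0.5, 0.5],   # elbow vertex blends both bones equally
                    [0.0, 1.0]])  # wrist vertex follows the lower bone only
print(linear_blend_skinning(rest, bones, weights))
# Expected output: [[0. 0. 0.] [1. 0. 0.] [1. 1. 0.]]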

Voice and behavioral synthesis

Voice synthesis for virtual actors primarily relies on neural text-to-speech (TTS) systems, which employ deep learning algorithms to generate human-like speech from textual input by modeling acoustic patterns and prosody from large datasets of recorded human voices. These systems, including models like Tacotron 2 for spectrogram prediction and WaveNet for waveform generation, produce outputs with natural intonation, emotional inflection, and accents, surpassing earlier concatenative methods that stitched pre-recorded clips. In applications for digital characters, such as in film dubbing or animated avatars, voice cloning techniques further enable replication of specific performers' timbres from mere minutes of source audio, facilitating seamless integration without requiring full re-recordings. For instance, tools from voice-cloning companies such as Respeecher have been utilized to synthesize character dialogues in franchise productions, preserving original vocal nuances while adapting to new scripts or languages.

Behavioral synthesis complements voice generation by using AI-driven models to simulate realistic physical and emotional responses in virtual actors, often drawing from motion capture data augmented with machine learning for adaptive behavior. Frameworks incorporating neural networks and cognitive architectures, such as fuzzy cognitive maps or the Ortony-Clore-Collins (OCC) model for appraisal-based emotions, allow digital entities to exhibit contextually appropriate gestures, facial expressions, and interactions autonomously. This synthesis enables virtual actors to adapt behaviors dynamically, for example, by learning from performance datasets to generate micro-movements or reactive sequences that align with narrative demands, reducing reliance on manual keyframing. In practice, AI algorithms process inputs like environmental cues or dialogue triggers to output believable behavior, as seen in systems where synthetic minds interpret roles through layered psychological simulations.

Integration of voice and behavioral synthesis achieves synchronized performances in virtual actors, with technologies like Azure's TTS avatars employing neural networks to align lip movements, eye gazes, and prosodic timing for lifelike talking heads. Deep learning facilitates emotional coherence, where vocal tone modulates behavioral outputs—such as heightened gestures during expressive speech—via multimodal models trained on synchronized audio-visual data. Challenges persist in achieving low latency for real-time applications and avoiding the uncanny valley through precise causal modeling of human variability, though advancements in edge-deployable systems like NVIDIA Riva have improved scalability for CGI productions as of 2025. Empirical evaluations indicate that such syntheses enhance perceived realism when grounded in empirical motion datasets rather than rule-based heuristics, with studies showing improved viewer engagement in virtual interactions.
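
As a rough illustration of the appraisal-based approach described above, the sketch below maps an appraised event to an emotion label and a gesture choice in the spirit of the OCC model; the event fields, intensity rule, thresholds, and clip names are illustrative assumptions rather than any specific production system.

from dataclasses import dataclass

@dataclass
class Event:
    # How desirable the event is for the character's goals (-1 to 1)
    # and how likely the character believes it is to occur (0 to 1).
    desirability: float
    likelihood: float

def appraise(event):
    # Toy OCC-style rule: prospective (uncertain) events yield hope/fear,
    # confirmed events yield joy/distress; intensity scales with both inputs.
    intensity = abs(event.desirability) * event.likelihood
    prospective = event.likelihood < 1.0
    if event.desirability >= 0:
        emotion = "hope" if prospective else "joy"
    else:
        emotion = "fear" if prospective else "distress"
    return emotion, round(intensity, 2)

def select_gesture(emotion, intensity):
    # Map the appraised emotion to a (hypothetical) animation clip name
    # for the behavior layer; weak emotions fall back to an idle loop.
    if intensity < 0.3:
        return "idle_neutral"
    clips = {"joy": "smile_open", "hope": "lean_forward",
             "fear": "step_back", "distress": "slump_shoulders"}
    return clips.get(emotion, "idle_neutral")

# A threatening but uncertain event should produce "fear" and a retreat gesture.
emotion, intensity = appraise(Event(desirability=-0.8, likelihood=0.6))
print(emotion, intensity, select_gesture(emotion, intensity))  # fear 0.48 step_back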

Applications in Media

Film and television

Virtual actors in film and television emerged as a practical solution for completing productions disrupted by an actor's death, initially relying on body doubles combined with rudimentary CGI face replacements and archival footage. In Gladiator (2000), after Oliver Reed died during filming, his character Proximo's remaining scenes were finished by superimposing a digital version of his face onto a body double's performance. Similarly, in the HBO series The Sopranos (2001 episode), Nancy Marchand's Livia Soprano appeared in a dream sequence via CGI attachment of her head to another actress's body following her passing.

Advancements in the mid-2000s enabled posthumous recreations using archived material. Sky Captain and the World of Tomorrow (2004) revived Laurence Olivier's likeness from archival footage and digital manipulation for a villainous role, marking one of the earliest high-profile uses of such technology in a live-action feature. Superman Returns (2006) incorporated Marlon Brando's image and synthesized voice from 1978 outtakes into a holographic scene, demonstrating voice synthesis alongside visual integration.

By the 2010s, motion capture and detailed facial scanning allowed more seamless digital doubles. In Furious 7 (2015), Paul Walker's incomplete scenes as Brian O'Conner were supplemented by his brothers acting as stand-ins, with Walker's face mapped via CGI and his voice reconstructed from prior recordings. Rogue One: A Star Wars Story (2016) controversially resurrected Peter Cushing's Grand Moff Tarkin using motion-captured performance from actor Guy Henry, overlaid with Cushing's scanned likeness from historical photos, while a young Princess Leia was digitally rendered from Carrie Fisher's 1977 footage.

Recent applications incorporate deepfake techniques and AI-driven de-aging for living actors or further posthumous roles. In The Mandalorian season 2 finale (2020), a youthful Luke Skywalker was portrayed through CGI enhancement of actor Max Lloyd-Jones's motion-captured performance, augmented by deepfake elements and Mark Hamill's voice work to evoke the character's appearance circa Return of the Jedi. These techniques, while enabling narrative continuity, have sparked debates over ethical likeness usage, as seen in ongoing projects like the planned digital casting of James Dean in Back to Eden.

Video games and advertising

In video games, virtual actors primarily function as AI-generated non-player characters (NPCs) that exhibit emergent behaviors, dynamic dialogues, and adaptive performances beyond traditional scripting. Studios have experimented with generative AI to power NPCs capable of real-time voice synthesis and interaction, allowing for vast numbers of unique conversational lines without extensive human recording sessions; commercial AI voice tools now let developers support effectively unlimited character voices, as adopted by select studios amid SAG-AFTRA negotiations in 2024. A July 2025 demonstration illustrated this by having AI NPCs respond vocally to a player's inputs, including reactions when their code was altered mid-game, highlighting potential for unscripted "acting" in immersive environments. Reports indicate that at least one major open-world title anticipated for release in 2026 may incorporate AI-driven NPCs with unscripted logic for more realistic crowd and individual behaviors, marking a shift toward synthetic performers that simulate human-like agency. This application reduces production costs by minimizing reliance on human motion capture and voice talent, though it has sparked labor disputes over consent for AI training data derived from actors' performances. Early implementations, such as generative AI NPCs tested in prototypes by 2024, demonstrate causal advantages in scale—enabling games with thousands of distinct "actors" versus the dozens feasible with manual scripting—but expose realism limits, as current models often produce inconsistent or repetitive outputs under scrutiny.

In advertising, virtual actors serve as fully synthetic digital humans programmed to deliver scripted messages, endorsements, or narratives in commercials, bypassing logistical challenges of casting and filming live performers. AI-generated entities have facilitated multilingual campaigns by altering appearances and voices for localized versions without reshooting, as seen in commercial productions leveraging generative tools for rapid variant creation by early 2025. Brands employ these for cost savings—estimated at up to 90% reduction in talent and production expenses for short-form ads—and customization, such as generating diverse ethnic representations or reviving likenesses of deceased figures, though the latter invites legal scrutiny over publicity rights. Examples include AI commercial actors in TV spots that mimic human expressiveness via neural rendering, enabling effectively unlimited iterations; by mid-2025, this extended to personalized ads where viewer data informs subtle trait adjustments in the virtual performer. Such techniques prioritize efficiency over authenticity, with empirical tests showing comparable viewer engagement to human-led ads when visuals achieve photorealism, but disclosures remain rare, potentially eroding trust if undetected. Ethical deployment emphasizes transparency to mitigate deception risks, aligning with industry pushes for labeled synthetic content amid rising concerns.

Intellectual property rights

The intellectual property rights associated with virtual actors primarily revolve around the right of publicity, which safeguards an individual's name, image, likeness, and voice against unauthorized commercial exploitation, alongside protections for the underlying digital assets and performances. In the United States, the right of publicity is governed by state laws rather than federal statute, creating variability; California, for instance, recognizes both pre- and post-mortem rights, allowing estates to control digital recreations of deceased performers for up to 70 years after death. This framework has been tested in cases involving virtual recreations, such as No Doubt v. Activision Publishing, Inc. (2011), where a court upheld the band's right to prevent unauthorized use of their digitized likenesses in a video game, affirming that performers retain control over digital doubles even in licensed contexts.

Recent advancements in AI have prompted legislative expansions to address digital replicas of actors, both living and deceased. California's AB 1836, enacted in September 2024, mandates written consent from the rights holder—typically the estate—for using AI to generate a deceased performer's likeness, voice, or performance in digital replicas, effective January 1, 2025, to prevent unauthorized digital replicas in film and other media. Similarly, New York's Digital Replicas Contract Act, effective in 2025, extends performers' rights to require explicit contracts for AI-generated voice and likeness uses, providing unions like SAG-AFTRA with leverage to negotiate protections during productions. Tennessee's ELVIS Act (2024) specifically targets voice cloning, prohibiting unauthorized AI reproductions of singers' voices, as seen in protections for estates like Elvis Presley's. At the federal level, the proposed NO FAKES Act seeks to establish a nationwide right of publicity covering AI-generated replicas, allowing individuals or estates to sue for damages from unauthorized digital likenesses, though it remains pending as of 2025.

For purely synthetic virtual actors not derived from real individuals—such as those created via generative AI or original design—IP protection falls under copyright law as audiovisual works, software, or derivative content, with creators or studios holding exclusive rights to reproduction and distribution. However, disputes arise when AI training data incorporates unauthorized scans or performances of real actors, potentially infringing copyrights on source materials, as highlighted in ongoing SAG-AFTRA negotiations for "digital replica" clauses in contracts to ensure performers receive residuals for likeness uses. Courts have clarified that raw likeness or voice elements are not copyrightable independently but gain protection when embedded in larger works, complicating claims against AI outputs unless publicity rights are invoked. Internationally, jurisdictions such as China have ruled in AI cases that unauthorized voice cloning or portrait use violates portrait rights and unfair competition laws, as in a 2025 Beijing Internet Court decision affirming platform liability for generative AI misuse. These developments underscore a patchwork of protections prioritizing consent and estate control, though gaps persist for non-commercial or transformative uses, prompting calls for harmonized federal standards to balance innovation with performer rights.
In the United States, the NO FAKES Act, introduced in 2024 and advancing through congressional discussions into 2025, seeks to establish a federal right of publicity protecting individuals' voices and likenesses from unauthorized AI-generated replicas, including deepfakes used to create virtual actors without explicit consent. This addresses gaps in state-level protections by imposing civil liability on creators and distributors of such content, with exceptions for news, parody, or transformative works, aiming to prevent exploitation of performers' digital likenesses in film, television, and other media.

At the state level, California enacted AB 2013 in October 2025, expanding the definition of "likeness" under its right of publicity statute to explicitly include AI-generated images, videos, and three-dimensional representations, requiring written consent from performers—or their estates for posthumous use—before creating or distributing digital replicas for commercial purposes such as virtual roles. This applies to both union and non-union actors, marking the first state-specific measure targeting AI deepfakes of performers and imposing penalties including damages and injunctive relief for violations. Similarly, SAG-AFTRA's 2023-2026 contracts mandate informed consent for digital replicas, including provisions for performers to suspend authorization during strikes and detailed disclosures on how AI will replicate their performance, voice, or appearance. These union agreements have influenced industry practices, as seen in OpenAI's October 2025 policy updates restricting Sora-generated deepfakes of actors following union advocacy.

In the European Union, the AI Act, effective from August 2024 with phased implementation through 2025, subjects deepfakes to transparency requirements such as watermarking and disclosure of synthetic content, but consent obligations primarily stem from GDPR for processing personal data in training models, including biometric likenesses for virtual actors. Member states enforce image and personality rights under national laws, which prohibit unauthorized commercial use of an individual's image or voice, with fines up to 4% of global turnover for GDPR breaches involving non-consensual creation of such content. However, enforcement varies, with calls for harmonized consent frameworks to address cross-border virtual actor applications. The TAKE IT DOWN Act, signed into U.S. law in May 2025, compels online platforms to remove non-consensual intimate deepfakes within 48 hours of notification, providing a mechanism to curb harmful virtual depictions though not exclusively tailored to entertainment contexts. Critics argue these regulations lag behind AI capabilities, potentially insufficient against anonymous or offshore deepfake production, while proponents emphasize their role in upholding performer autonomy and preventing economic displacement.

Ethical and Economic Debates

Labor market impacts

The integration of virtual actors into film and television production has prompted fears of widespread job displacement among human performers, particularly for roles involving extras, stunt work, and deceased actors' likenesses recreated via deepfakes or CGI. In the 2023 SAG-AFTRA strike, which lasted 118 days and concluded on November 9, 2023, actors secured contractual safeguards against unauthorized digital replicas, mandating consent, compensation, and watermarking for AI-generated content to mitigate threats to livelihoods. Despite these measures, performers continue to view synthetic actors as a persistent threat, enabling studios to bypass hiring for repetitive or hazardous scenes at lower costs.

Empirical projections underscore potential labor market contraction: a January 2024 analysis estimated that generative AI, including tools for virtual performances, could disrupt 62,000 jobs in California's entertainment sector by 2027, with the craft and technical roles that enable virtual actors facing the highest exposure. Surveys of industry leaders reveal that 75% of firms reported AI-driven reductions or eliminations in roles by mid-2024, often shifting demand from on-set talent to algorithmic generation. Union reports, such as the Animation Guild's assessment of 21,300 affected positions, predict transitional disruptions rather than total elimination, but emphasize vulnerabilities for non-lead actors lacking bargaining power.

Offsetting these effects, virtual actor development has spurred demand for new specialties like digital-double artistry and AI ethics compliance, with firms such as Deep Voodoo exemplifying roles in deepfake refinement. Industry documentation notes that while AI may erode "menial" production jobs, it could augment creative workflows, potentially expanding overall industry output if reskilling occurs. Nonetheless, evidence indicates uneven distribution: gains accrue to technical experts, while traditional performers face structural obsolescence without adaptive policy interventions.

Full replacement of human actors by virtual actors remains unlikely in the near term, particularly for lead roles, because directors emphasize the unique value of nuanced human performance and audiences continue to demand the authenticity and emotional depth that AI struggles to replicate convincingly, lacking genuine lived experience. Union barriers, reinforced by SAG-AFTRA's protections against unauthorized replicas, further constrain substitution, with most experts viewing comprehensive displacement as decades away given current technical and creative limitations.

Broader ethical considerations

The deployment of virtual actors raises profound questions about the erosion of perceptual trust in visual media, as audiences may struggle to distinguish synthetic performances from authentic human ones, potentially fostering widespread distrust toward all audiovisual content. Empirical studies indicate that exposure to undisclosed deepfakes can reduce belief in genuine videos by up to 20-30% in controlled experiments, amplifying a "liar's dividend" where real events are dismissed as fabricated. This blurring of boundaries challenges causal realism in visual media, where viewers' reliance on visual cues for truth assessment is undermined, leading to broader societal vulnerabilities in discerning authentic recordings from fabrications.

Philosophically, virtual actors interrogate the intrinsic value of human authenticity in performative arts, positing that machine-generated expressions lack the irreducible subjectivity and emotional depth derived from biological lived experience, thereby diluting the epistemic and aesthetic integrity of performance. Critics argue this substitution commodifies identity without reciprocal agency, as synthetic replicas—trained on real performers' work—exploit human labor sans moral reciprocity, echoing first-principles concerns over unearned derivation in intellectual labor. While proponents highlight non-deceptive applications, such as disclosed archival recreations, the normative wrong lies not solely in factual inaccuracy but in the systemic incentivization of perceptual manipulation, which habituates societies to engineered narratives over organic ones.

On a meta-level, the unchecked proliferation of virtual actors risks normalizing a post-truth culture, where empirical verification yields to algorithmic plausibility, exacerbating cognitive biases like confirmation-seeking and diminishing incentives for rigorous source scrutiny. Academic analyses, often from institutionally biased outlets, underemphasize these risks in favor of technological optimism, yet data from misinformation propagation models reveal that even benign entertainment uses of synthetic media correlate with heightened public cynicism toward institutional narratives, with surveys following post-2020 incidents showing 15-25% drops in media trust metrics. Thus, ethical prudence demands transparency protocols, not mere regulatory afterthoughts, to preserve the causal chains linking evidence to belief.

Cultural Representations

Depictions in fiction

In the 1981 science fiction thriller Looker, directed by Michael Crichton, the narrative centers on a technology that scans live models' bodies to generate 3D holographic projections used as virtual performers in television advertisements, highlighting early fictional concerns over digital replication supplanting human actors. The film, released on October 30, 1981, portrays these holograms as flawless, controllable substitutes that eliminate the need for on-set presence, while also delving into corporate exploitation and assassination plots tied to the invention.

The 2002 satirical film S1m0ne, written and directed by Andrew Niccol, features a Hollywood director, Viktor Taransky (played by Al Pacino), who develops software to create Simone, a fully synthetic digital actress, after his lead star quits a production. Released on August 30, 2002, the story examines Simone's rapid ascent to global stardom through fabricated public appearances and award wins, ultimately exposing the deception when Taransky struggles to maintain the illusion amid demands for her real-world presence. The film critiques fame's superficiality and the ethical perils of indistinguishable virtual personas deceiving audiences and industry insiders.

Black Mirror's anthology series has recurrently explored virtual actors through speculative scenarios. In the 2013 episode "Be Right Back" (Season 2, Episode 1, aired February 11, 2013), a grieving woman uses an AI service to reconstruct her deceased partner's personality from his digital communications, resulting in a synthetic android that mimics his behaviors and speech patterns in interpersonal "performances." This portrayal underscores the emotional manipulation and imperfect realism of such recreations, as the clone fails to fully replicate human depth despite initially convincing interactions. The 2017 episode "USS Callister" (Season 4, Episode 1, aired December 29, 2017) depicts a game developer, Robert Daly, who scans his colleagues' likenesses to spawn conscious digital clones within a private game world modeled after his favorite science-fiction series, forcing them to act as subservient crew members in scripted adventures. The clones, trapped in the simulation, exhibit agency and suffer psychologically from their coerced roles, illustrating dystopian abuses of virtual acting for personal gratification and power fantasies. This narrative critiques the violation of consent in digital replication, with the clones rebelling to escape their performative confines.

Earlier television depictions include Max Headroom, whose titular character emerges as a glitch-evolved AI broadcaster who "performs" as a snarky host across a dystopian media landscape dominated by television conglomerates. Originating from a British telefilm aired in 1985 and adapted into an ABC series that ran from March 31 to May 5, 1987, Max Headroom satirizes commercial television's artificiality, portraying the virtual host as a disruptive, self-aware entity challenging human anchors' authenticity.
