Emotion classification
from Wikipedia
Colored intaglio prints by Charles Le Brun and J. Pass depicting the facial expressions of sixteen emotions

Emotion classification is the means by which one may distinguish or contrast one emotion from another. It is a contested issue in emotion research and in affective science.

Emotions as discrete categories

In discrete emotion theory, all humans are thought to have an innate set of basic emotions that are cross-culturally recognizable. These basic emotions are described as "discrete" because they are believed to be distinguishable by an individual's facial expression and biological processes.[1] Theorists have conducted studies to determine which emotions are basic. A popular example is Paul Ekman and his colleagues' cross-cultural study of 1992, in which they concluded that the six basic emotions are anger, disgust, fear, happiness, sadness, and surprise.[2] Ekman explains that there are particular characteristics attached to each of these emotions, allowing them to be expressed in varying degrees in a non-verbal manner.[3][4] Each emotion acts as a discrete category rather than an individual emotional state.[5]

Simplicity debate

People subjectively experience emotions as clearly recognizable in themselves and others. This apparent ease of recognition has led to the identification of a number of emotions that are said to be basic and universal among all people. However, a debate among experts has questioned this understanding of what emotions are, and recent discussions have traced how views of emotion have progressed over the years.[6]

On "basic emotion" accounts, activation of an emotion, such as anger, sadness, or fear, is "triggered" by the brain's appraisal of a stimulus or event with respect to the perceiver's goals or survival. In particular, the function, expression, and meaning of different emotions are hypothesized to be biologically distinct from one another. A theme common to many basic emotions theories is that there should be functional signatures that distinguish different emotions: we should be able to tell what emotion a person is feeling by looking at his or her brain activity and/or physiology. Furthermore, knowledge of what the person is seeing or the larger context of the eliciting event should not be necessary to deduce what the person is feeling from observing the biological signatures.[5]

On "constructionist" accounts, the emotion a person feels in response to a stimulus or event is "constructed" from more elemental biological and psychological ingredients. Two hypothesized ingredients are "core affect" (characterized by, e.g., hedonic valence and physiological arousal) and conceptual knowledge (such as the semantic meaning of the emotion labels themselves, e.g., the word "anger"). A theme common to many constructionist theories is that different emotions do not have specific locations in the nervous system or distinct physiological signatures, and that context is central to the emotion a person feels because of the accessibility of different concepts afforded by different contexts.[7]

Dimensional models of emotion

For both theoretical and practical reasons, researchers define emotions according to one or more dimensions. In his 1649 philosophical treatise The Passions of the Soul, René Descartes defines and investigates the six primary passions: wonder, love, hate, desire, joy, and sadness. In 1897, Wilhelm Max Wundt, the father of modern psychology, proposed that emotions can be described by three dimensions: "pleasurable versus unpleasurable", "arousing or subduing", and "strain or relaxation".[8] In 1954, Harold Schlosberg named three dimensions of emotion: "pleasantness–unpleasantness", "attention–rejection" and "level of activation".[9]

Dimensional models of emotion attempt to conceptualize human emotions by defining where they lie in two or three dimensions. Most dimensional models incorporate valence and arousal or intensity dimensions. Dimensional models of emotion suggest that a common and interconnected neurophysiological system is responsible for all affective states.[10] These models contrast with theories of basic emotion, which propose that different emotions arise from separate neural systems.[10] Several dimensional models of emotion have been developed, though only a few remain dominant and widely accepted.[11] The most prominent two-dimensional models are the circumplex model, the vector model, and the positive activation–negative activation (PANA) model.[11]

Circumplex model

Circumplex model of emotion has two axes: Valence and Arousal

The circumplex model of emotion was developed by James Russell.[12] This model suggests that emotions are distributed in a two-dimensional circular space, containing arousal and valence dimensions. Arousal represents the vertical axis and valence represents the horizontal axis, while the center of the circle represents a neutral valence and a medium level of arousal.[11] In this model, emotional states can be represented at any level of valence and arousal, or at a neutral level of one or both of these factors. Circumplex models have been used most commonly to test stimuli of emotion words, emotional facial expressions, and affective states.[13]

Russell and Lisa Feldman Barrett describe their modified circumplex model as representative of core affect, or the most elementary feelings that are not necessarily directed toward anything. Different prototypical emotional episodes, or clear emotions that are evoked or directed by specific objects, can be plotted on the circumplex, according to their levels of arousal and pleasure.[14]
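The quadrant structure lends itself to a simple computational reading: the signs of the valence and arousal coordinates pick out a region of the circle. The following is a minimal sketch of that idea, assuming hypothetical coordinates in [-1, 1]; it is an illustration, not Russell's method.

```python
# Minimal sketch: naming circumplex regions from the signs of
# hypothetical valence/arousal coordinates in [-1, 1].
def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Label the region of the circumplex containing (valence, arousal)."""
    if max(abs(valence), abs(arousal)) < 0.1:
        return "neutral core affect"  # near the center of the circle
    if valence >= 0 and arousal >= 0:
        return "pleasant-activated (e.g., excitement)"
    if valence >= 0:
        return "pleasant-deactivated (e.g., calm)"
    if arousal >= 0:
        return "unpleasant-activated (e.g., distress)"
    return "unpleasant-deactivated (e.g., depression)"

print(circumplex_quadrant(0.6, 0.7))    # pleasant-activated
print(circumplex_quadrant(-0.5, -0.6))  # unpleasant-deactivated
```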

Vector model

The vector model of emotion suggests that emotions are structured in terms of arousal and valence. A positive valence represents appetitive motivation and negative valence represents defensive motivation.[15]

The vector model of emotion appeared in 1992.[16] This two-dimensional model consists of vectors that point in two directions, representing a "boomerang" shape. The model assumes that there is always an underlying arousal dimension, and that valence determines the direction in which a particular emotion lies. For example, a positive valence would shift the emotion up the top vector and a negative valence would shift the emotion down the bottom vector.[11] In this model, high arousal states are differentiated by their valence, whereas low arousal states are more neutral and are represented near the meeting point of the vectors. Vector models have been most widely used in the testing of word and picture stimuli.[13]
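The boomerang geometry can be made concrete with a small sketch: arousal sets how far a state lies from the vertex, and the sign of valence selects the upper or lower arm. The coordinates below are illustrative assumptions, not values from the model's authors.

```python
# Minimal sketch of the "boomerang" shape: arousal sets distance along a
# vector, the sign of valence picks the upper or lower arm.
def vector_position(arousal: float, valence: float) -> tuple[float, float]:
    """Map (arousal in [0, 1], signed valence) to a point on the boomerang."""
    direction = 1.0 if valence >= 0 else -1.0
    return (arousal, direction * arousal)  # arms meet at the low-arousal vertex

print(vector_position(0.9, +0.5))   # high-arousal positive: upper arm
print(vector_position(0.9, -0.5))   # high-arousal negative: lower arm
print(vector_position(0.05, -0.2))  # low arousal: near the vertex, nearly neutral
```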

Positive activation – negative activation (P.A.N.A.) model

The positive activation – negative activation (P.A.N.A.) or "consensual" model of emotion, originally created by Watson and Tellegen in 1985,[17] suggests that positive affect and negative affect are two separate systems. Similar to the vector model, states of higher arousal tend to be defined by their valence, and states of lower arousal tend to be more neutral in terms of valence.[11] In the P.A.N.A. model, the vertical axis represents low to high positive affect and the horizontal axis represents low to high negative affect. The dimensions of valence and arousal lie at a 45-degree rotation over these axes.[17]
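The 45-degree relation between the two coordinate systems is just an axis rotation, so valence–arousal coordinates can be converted to PA/NA scores with elementary trigonometry. A minimal sketch, with scaling chosen for illustration rather than taken from the model:

```python
# Minimal sketch of the 45-degree rotation relating valence-arousal
# coordinates to PANA axes; the exact scaling is an assumption.
import math

def pana_from_va(valence: float, arousal: float) -> tuple[float, float]:
    """Rotate valence-arousal coordinates 45 degrees into PA/NA axes."""
    c = math.cos(math.radians(45))
    pa = c * (arousal + valence)   # positive activation
    na = c * (arousal - valence)   # negative activation
    return pa, na

print(pana_from_va(0.8, 0.8))    # excited: high PA, near-zero NA
print(pana_from_va(-0.8, 0.8))   # distressed: near-zero PA, high NA
```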

Plutchik's model

Robert Plutchik offers a three-dimensional model that is a hybrid of both basic-complex categories and dimensional theories. It arranges emotions in concentric circles, where inner circles are more basic and outer circles more complex. Notably, outer circles are also formed by blending the inner-circle emotions. Plutchik's model, like Russell's, derives from a circumplex representation, where emotional words were plotted based on similarity.[18] There are numerous emotions, which appear in several intensities and can be combined in various ways to form emotional "dyads".[19][20][21][22][23]

PAD emotional state model

The PAD emotional state model is a psychological model developed by Albert Mehrabian and James A. Russell to describe and measure emotional states. PAD uses three numerical dimensions to represent all emotions.[24][25] The PAD dimensions are Pleasure, Arousal and Dominance, described in the list below; a small code sketch follows the list.

  • The Pleasure-Displeasure Scale measures how pleasant an emotion may be. For instance, both anger and fear are unpleasant emotions and score high on the displeasure scale, whereas joy is a pleasant emotion.[24]
  • The Arousal-Nonarousal Scale measures how energized or soporific one feels. It is not the intensity of the emotion—for grief and depression can be low-arousal yet intense feelings. While both anger and rage are unpleasant emotions, rage has a higher intensity or a higher arousal state; boredom, which is also an unpleasant state, has a low arousal value.[24]
  • The Dominance-Submissiveness Scale represents the controlling and dominant nature of the emotion. For instance, while both fear and anger are unpleasant emotions, anger is a dominant emotion, while fear is a submissive emotion.[24]
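The three scales can be read as coordinates, so the bullet-point examples translate directly into PAD triples. The numeric values below are illustrative assumptions, not published Mehrabian–Russell scores:

```python
# Minimal sketch encoding the bullet-point examples as PAD triples;
# numeric values are illustrative assumptions, not published scores.
from dataclasses import dataclass

@dataclass
class PAD:
    pleasure: float    # -1 (displeasure) .. +1 (pleasure)
    arousal: float     # -1 (soporific)   .. +1 (energized)
    dominance: float   # -1 (submissive)  .. +1 (dominant)

anger = PAD(pleasure=-0.6, arousal=0.6, dominance=0.5)    # unpleasant, dominant
fear  = PAD(pleasure=-0.6, arousal=0.6, dominance=-0.6)   # unpleasant, submissive
joy   = PAD(pleasure=0.8, arousal=0.4, dominance=0.3)     # pleasant

# Anger and fear share pleasure and arousal, differing mainly in dominance:
print(anger.dominance - fear.dominance)  # 1.1
```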

Criticisms

Cultural considerations

Ethnographic and cross-cultural studies of emotions have shown the variety of ways in which emotions differ across cultures. Because of these differences, many cross-cultural psychologists and anthropologists challenge the idea of universal classifications of emotions altogether. Cultural differences have been observed in the way in which emotions are valued, expressed, and regulated. The social norms for emotions, such as the frequency with which or the circumstances in which they are expressed, also vary drastically.[26][27] For example, the demonstration of anger is encouraged among the Kaluli people, but condemned among the Utku Inuit.[28]

The largest piece of evidence that disputes the universality of emotions is language. Differences between languages correlate directly with differences in emotion taxonomy. Languages differ in that they categorize emotions based on different components: some may categorize by event types, whereas others categorize by action readiness. Furthermore, emotion taxonomies vary due to the differing implications emotions have in different languages.[26] Moreover, not all English emotion words have equivalents in other languages and vice versa, indicating that some languages have words for emotions that others lack.[29] Emotions such as schadenfreude in German and saudade in Portuguese are commonly expressed in their respective languages but lack a direct English equivalent.

Some languages do not differentiate between emotions that are considered to be the basic emotions in English. For instance, certain African languages have one word for both anger and sadness, and others for shame and fear. There is ethnographic evidence that even challenges the universality of the category "emotions" because certain cultures lack a specific word relating to the English word "emotions".[27]

Lists of emotions

Emotions can be grouped into broader affects corresponding to the eliciting situation.[30] An affect is the range of feeling a person experiences.[31] Both positive and negative emotions play necessary roles in daily life.[32] Many theories of emotion have been proposed,[33] offering contrasting views.[34]

Basic emotions

Contrasting basic emotions

A 2009 review[44] of theories of emotion identifies and contrasts fundamental emotions according to three key criteria for mental experiences that:

  1. have a strongly motivating subjective quality like pleasure or pain;
  2. are a response to some event or object that is either real or imagined;
  3. motivate particular kinds of behavior.

The combination of these attributes distinguishes emotions from sensations, feelings and moods.

| Kind of emotion | Positive emotions | Negative emotions |
|---|---|---|
| Related to object properties | Interest, curiosity, enthusiasm | Alarm, panic |
| | Attraction, desire, admiration | Aversion, disgust, revulsion |
| | Surprise, amusement | Indifference, habituation, boredom |
| Future appraisal | Hope, excitement | Fear, anxiety, dread |
| Event-related | Gratitude, thankfulness | Anger, rage |
| | Joy, elation, triumph, jubilation | Sorrow, grief |
| | Patience | Frustration, restlessness |
| | Contentment | Discontentment, disappointment |
| Self-appraisal | Humility, modesty | Pride, arrogance |
| Social | Charity | Avarice, greed, miserliness, envy, jealousy |
| | Sympathy | Cruelty |
| Cathected | Love | Hate |

Emotion dynamics

Researchers distinguish several emotion dynamics, most commonly how intense (mean level), variable (fluctuations), inert (temporal dependency), unstable (magnitude of moment-to-moment fluctuations), or differentiated someone's emotions are (the specificity or granularity of emotions), and whether and how an emotion augments or blunts other emotions.[45] Meta-analytic reviews show systematic developmental changes in emotion dynamics throughout childhood and adolescence and substantial between-person differences.[45]

HUMAINE's proposal for EARL

The emotion annotation and representation language (EARL) proposed by the Human-Machine Interaction Network on Emotion (HUMAINE) classifies 49 emotions.[46]

Parrott's emotions by groups

A tree-structured list of emotions was described in Shaver et al. (1987),[47] and also featured in Parrott (2001).[48]

| Primary emotion | Secondary emotion | Tertiary emotions |
|---|---|---|
| Love | Affection | Adoration · Fondness · Liking · Attraction · Caring · Tenderness · Compassion · Sentimentality |
| | Lust/Sexual desire | Desire · Passion · Infatuation |
| | Longing | Longing |
| Joy | Cheerfulness | Amusement · Bliss · Gaiety · Glee · Jolliness · Joviality · Joy · Delight · Enjoyment · Gladness · Happiness · Jubilation · Elation · Satisfaction · Ecstasy · Euphoria |
| | Zest | Enthusiasm · Zeal · Excitement · Thrill · Exhilaration |
| | Contentment | Pleasure |
| | Pride | Triumph |
| | Optimism | Eagerness · Hope |
| | Enthrallment | Enthrallment · Rapture |
| | Relief | Relief |
| Surprise | Surprise | Amazement · Astonishment |
| Anger | Irritability | Aggravation · Agitation · Annoyance · Grouchy · Grumpy · Crosspatch |
| | Exasperation | Frustration |
| | Rage | Anger · Outrage · Fury · Wrath · Hostility · Ferocity · Bitterness · Hatred · Scorn · Spite · Vengefulness · Dislike · Resentment |
| | Disgust | Revulsion · Contempt · Loathing |
| | Envy | Jealousy |
| | Torment | Torment |
| Sadness | Suffering | Agony · Anguish · Hurt |
| | Sadness | Depression · Despair · Gloom · Glumness · Unhappiness · Grief · Sorrow · Woe · Misery · Melancholy |
| | Disappointment | Dismay · Displeasure |
| | Shame | Guilt · Regret · Remorse |
| | Neglect | Alienation · Defeatism · Dejection · Embarrassment · Homesickness · Humiliation · Insecurity · Insult · Isolation · Loneliness · Rejection |
| | Sympathy | Pity · Mono no aware · Sympathy |
| Fear | Horror | Alarm · Shock · Fear · Fright · Horror · Terror · Panic · Hysteria · Mortification |
| | Nervousness | Anxiety · Suspense · Uneasiness · Apprehension (fear) · Worry · Distress · Dread |

Plutchik's wheel of emotions

Plutchik's original emotion wheel
A diagram depicting the primary, secondary, and tertiary dyads

In 1980, Robert Plutchik diagrammed a wheel of eight emotions: joy, trust, fear, surprise, sadness, disgust, anger, and anticipation, inspired by his Ten Postulates.[49][50] Plutchik also theorized twenty-four "Primary", "Secondary", and "Tertiary" dyads (feelings composed of two emotions).[51][52][53][54][55][56][57] Emotions on the wheel can be paired in four groups:

Primary dyad = one petal apart = Love = Joy + Trust
Secondary dyad = two petals apart = Envy = Sadness + Anger
Tertiary dyad = three petals apart = Shame = Fear + Disgust
Opposite emotions = four petals apart = Anticipation ↔ Surprise
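These pairings follow directly from positions on the wheel, so the petal distance between two primaries can be computed from their indices around the circle. A minimal sketch, with dyad names taken from the tables below:

```python
# Minimal sketch of Plutchik's dyad arithmetic; dyad names are taken
# from the tables in this article.
WHEEL = ["joy", "trust", "fear", "surprise",
         "sadness", "disgust", "anger", "anticipation"]

def petal_distance(a: str, b: str) -> int:
    """Number of petals separating two primary emotions (1..4)."""
    d = abs(WHEEL.index(a) - WHEEL.index(b)) % 8
    return min(d, 8 - d)

DYADS = {  # a few examples from the article's tables
    frozenset({"joy", "trust"}): "love",       # primary dyad (1 apart)
    frozenset({"sadness", "anger"}): "envy",   # secondary dyad (2 apart)
    frozenset({"fear", "disgust"}): "shame",   # tertiary dyad (3 apart)
}

print(petal_distance("joy", "trust"))              # 1 -> primary dyad
print(petal_distance("anticipation", "surprise"))  # 4 -> opposites
print(DYADS[frozenset({"joy", "trust"})])          # love
```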

There are also triads, emotions formed from three primary emotions, though Plutchik never describes in any detail what the triads might be.[58] This yields a combination of 24 dyads and 32 triads, making 56 emotions at one intensity level.[59] Emotions can be mild or intense;[60] for example, distraction is a mild form of surprise, and rage is an intense form of anger. The kinds of relations between pairs of emotions are shown in the tables below:

Emotions and opposites

| Mild emotion | Mild opposite | Basic emotion | Basic opposite | Intense emotion | Intense opposite |
|---|---|---|---|---|---|
| Serenity | Pensiveness, Gloominess | Joy, Cheerfulness | Sadness, Dejection | Ecstasy, Elation | Grief, Sorrow |
| Acceptance, Tolerance | Boredom, Dislike | Trust | Disgust, Aversion | Admiration, Adoration | Loathing, Revulsion |
| Apprehension, Dismay | Annoyance, Irritation | Fear, Fright | Anger, Hostility | Terror, Panic | Rage, Fury |
| Distraction, Uncertainty | Interest, Attentiveness | Surprise | Anticipation, Expectancy | Amazement, Astonishment | Vigilance |

Dyads (combinations)

| Human feelings | Emotions | Opposite feelings | Emotions |
|---|---|---|---|
| Optimism, Courage | Anticipation + Joy | Disapproval, Disappointment | Surprise + Sadness |
| Hope, Fatalism | Anticipation + Trust | Unbelief, Shock | Surprise + Disgust |
| Anxiety, Dread | Anticipation + Fear | Outrage, Hate | Surprise + Anger |
| Love, Friendliness | Joy + Trust | Remorse, Misery | Sadness + Disgust |
| Guilt, Excitement | Joy + Fear | Envy, Sullenness | Sadness + Anger |
| Delight, Doom | Joy + Surprise | Pessimism | Sadness + Anticipation |
| Submission, Modesty | Trust + Fear | Contempt, Scorn | Disgust + Anger |
| Curiosity | Trust + Surprise | Cynicism | Disgust + Anticipation |
| Sentimentality, Resignation | Trust + Sadness | Morbidness, Derisiveness | Disgust + Joy |
| Awe, Alarm | Fear + Surprise | Aggressiveness, Vengeance | Anger + Anticipation |
| Despair | Fear + Sadness | Pride, Victory | Anger + Joy |
| Shame, Prudishness | Fear + Disgust | Dominance | Anger + Trust |

Opposite combinations[54]

| Human feelings | Emotions |
|---|---|
| Bittersweetness | Joy + Sadness |
| Ambivalence | Trust + Disgust |
| Frozenness | Fear + Anger |
| Confusion | Surprise + Anticipation |

Similar emotions in the wheel are adjacent to each other.[61] Anger, Anticipation, Joy, and Trust are positive in valence, while Fear, Surprise, Sadness, and Disgust are negative in valence. Anger is classified as a "positive" emotion because it involves "moving toward" a goal,[62] while surprise is negative because it is a violation of someone's territory.[63] The emotion dyads each have half-opposites and exact opposites:[64]

Anticipation, Joy, Surprise, Sadness

| + | Sadness | Joy |
|---|---|---|
| Anticipation | Pessimism | Optimism |
| Surprise | Disapproval | Delight |

Joy, Trust, Sadness, Disgust

| + | Disgust | Trust |
|---|---|---|
| Joy | Morbidness | Love |
| Sadness | Remorse | Sentimentality |

Trust, Fear, Disgust, Anger

| + | Fear | Anger |
|---|---|---|
| Trust | Submission | Dominance |
| Disgust | Shame | Contempt |

Fear, Surprise, Anger, Anticipation

| + | Surprise | Anticipation |
|---|---|---|
| Anger | Outrage | Aggressiveness |
| Fear | Awe | Anxiety |

Trust, Surprise, Disgust, Anticipation

| + | Surprise | Anticipation |
|---|---|---|
| Trust | Curiosity | Hope |
| Disgust | Unbelief | Cynicism |

Joy, Fear, Sadness, Anger

| + | Fear | Anger |
|---|---|---|
| Joy | Guilt | Pride |
| Sadness | Despair | Envy |

Six emotion axes

MIT researchers[65] published a paper titled "An Affective Model of Interplay Between Emotions and Learning: Reengineering Educational Pedagogy—Building a Learning Companion", which lists six axes of emotions, each running between opposite poles with intermediate emotions along the range.[65]

Emotional flow

| Axis | −1.0 | −0.5 | 0 | 0 | +0.5 | +1.0 |
|---|---|---|---|---|---|---|
| Anxiety – Confidence | Anxiety | Worry | Discomfort | Comfort | Hopeful | Confident |
| Boredom – Fascination | Ennui | Boredom | Indifference | Interest | Curiosity | Intrigue |
| Frustration – Euphoria | Frustration | Puzzlement | Confusion | Insight | Enlightenment | Epiphany |
| Dispirited – Encouraged | Dispirited | Disappointed | Dissatisfied | Satisfied | Thrilled | Enthusiastic |
| Terror – Enchantment | Terror | Dread | Apprehension | Calm | Anticipatory | Excited |
| Humiliation – Pride | Humiliated | Embarrassed | Self-conscious | Pleased | Satisfied | Proud |

They also proposed a model labeling the phases of emotions during learning.[65]

| Phase | Negative affect | Positive affect |
|---|---|---|
| Constructive learning | Disappointment, Puzzlement, Confusion | Awe, Satisfaction, Curiosity |
| Un-learning | Frustration, Discard misconceptions | Hopefulness, Fresh research |

The Book of Human Emotions

Tiffany Watt Smith listed 154 emotions and feelings from around the world.[66]

Mapping facial expressions

Scientists have mapped twenty-one different facial expressions of emotion,[68][69] expanding on Paul Ekman's six basic emotions of anger, disgust, fear, happiness, sadness, and surprise:

| | Fearful | Angry | Surprised | Disgusted |
|---|---|---|---|---|
| Happy | | | Happily Surprised | Happily Disgusted |
| Sad | Sadly Fearful | Sadly Angry | Sadly Surprised | Sadly Disgusted |
| Appalled | | Fearfully Angry | Fearfully Surprised | Fearfully Disgusted |
| Awed | | | Angrily Surprised | Angrily Disgusted |
| Hatred | | | | Disgustedly Surprised |

from Grokipedia
Emotion classification is the computational process of identifying and categorizing human emotions using techniques applied to various data modalities, such as facial expressions, speech, text, and physiological signals, forming a fundamental task within the broader field of affective computing. Affective computing, a term coined by Rosalind W. Picard in 1995 and detailed in her 1997 book Affective Computing, integrates principles from computer science, psychology, and cognitive science to enable machines to recognize, interpret, process, and even simulate human emotional states, thereby enhancing interactions between humans and technology. Central to emotion classification are theoretical models of emotion, primarily the discrete model, which categorizes emotions into basic types like anger, disgust, fear, happiness, surprise, and sadness as proposed by Paul Ekman, and the dimensional model, which represents emotions along continuous axes such as valence (pleasantness) and arousal (intensity). Recognition typically occurs through unimodal or multimodal approaches; for instance, facial emotion recognition analyzes facial expressions using datasets like CK+, achieving accuracies often exceeding 90% with deep learning methods, while speech-based classification examines prosodic features like pitch and tone, often reaching 70% accuracy, and text analysis employs natural language processing to detect sentiment and match emotion lexicons. The importance of emotion classification lies in its applications across domains, including mental health monitoring via wearable devices for stress detection, personalized learning systems that adapt to learner emotional states, customer service chatbots for sentiment-aware responses, and automotive safety through driver drowsiness detection. Recent advances, driven by deep learning architectures like convolutional neural networks and transformers, have improved multimodal fusion for more robust emotion recognition, addressing challenges such as cultural variations in emotional expression and real-time processing constraints, though ethical concerns around privacy and bias remain prominent.

Overview and Historical Context

Definition and Scope

Emotion classification is the systematic process of categorizing and distinguishing emotional states based on their phenomenological, physiological, and behavioral features to facilitate scientific inquiry and practical applications. This involves organizing emotions—typically defined as relatively short-term, adaptive responses to specific stimuli that involve subjective feelings, physiological changes, and behavioral tendencies—as distinct from broader affective phenomena. Specifically, emotions differ from moods, which are longer-lasting, less intense, and often lacking a clear eliciting event, and from affects, which encompass a wider range of valenced feeling states including both emotions and moods. The primary purposes of emotion classification are to advance affective science by providing frameworks for analyzing emotional processes and their impacts on cognition and behavior, to support clinical diagnosis and intervention in affective disorders by identifying maladaptive emotional patterns, and to enable advancements in artificial intelligence for affective computing systems that enhance human-computer interaction through empathetic responses. For instance, in clinical settings, classifying emotions aids in assessing patient emotional states via multimodal data like speech and facial expressions, improving therapeutic outcomes. In AI, it underpins algorithms that detect emotions from biosignals or text, fostering applications in mental health monitoring and personalized interfaces. Central terminologies in emotion classification include primary emotions, which are innate, biologically hardwired responses like fear or anger that serve survival functions, in contrast to secondary emotions that emerge from cognitive interpretations, social contexts, or combinations of primaries, such as guilt. Valence refers to the hedonic tone of an emotion, ranging from positive (e.g., joy) to negative (e.g., sadness), while arousal denotes its physiological intensity, from low (e.g., calm) to high (e.g., excitement). Duration-based classifications further differentiate transient emotions, which are brief and stimulus-bound, from enduring emotional traits that reflect stable individual differences in affective reactivity. The scope of emotion classification extends interdisciplinarily, intersecting with neuroscience to map neural correlates of emotional categories, with philosophy to debate the ontological status of emotions as mental states, and with computer science to develop computational models for real-time emotion detection. These overlaps enable holistic understandings, such as integrating brain imaging data with algorithmic predictions to refine classification schemes. Emotion classification generally employs either categorical approaches, treating emotions as discrete types, or dimensional ones, plotting them on axes like valence and arousal.

Historical Evolution

The classification of emotions has roots in ancient philosophy, where thinkers sought to understand affective states in relation to human behavior and ethics. In his Rhetoric, Aristotle provided one of the earliest systematic treatments of emotions (pathē), describing them as temporary disturbances of judgment that influence persuasion; he outlined specific emotions such as anger, fear, pity, and indignation, analyzing their causes, objects, and effects to aid orators in evoking them appropriately. The Stoics, including Chrysippus, further developed this by classifying passions (pathē) as irrational impulses contrary to reason, grouping them into four primary types—distress (over present evils), pleasure (over present goods), appetite (for future goods), and fear (of future evils)—with the goal of achieving apatheia, or freedom from such disturbances through rational control. The modern scientific study of emotions began in the 19th century with evolutionary perspectives. Charles Darwin's The Expression of the Emotions in Man and Animals (1872) argued that emotional expressions are innate, adaptive traits shared across species, evolved through natural selection to communicate internal states and facilitate survival; this work shifted focus from philosophical speculation to biological and comparative analysis, influencing subsequent empirical research. In the late 1880s, the James-Lange theory, independently proposed by William James and Carl Lange, posited that emotions result from the perception of physiological changes in the body in response to stimuli, reversing the common-sense view that feelings cause bodily reactions; James articulated this in his seminal article, emphasizing that "we feel sorry because we cry, angry because we strike, afraid because we tremble." Early 20th-century psychology was dominated by Sigmund Freud's psychoanalytic framework, which viewed emotions as signals of unconscious conflicts between instinctual drives (id), reality (ego), and morality (superego), often manifesting as anxiety or affect tied to repressed experiences; this approach prioritized introspection and clinical observation over experimental methods. Following World War II, American psychology underwent a significant shift toward empirical, behaviorist, and later cognitive paradigms, driven by the need for measurable treatments in veteran care and funded by government institutions; this move diminished psychoanalysis's influence in favor of observable behaviors and experimental validation, paving the way for quantitative studies of emotional responses. In the 1970s, Paul Ekman's cross-cultural research on facial expressions reinforced Darwin's universality claims by identifying six basic emotions—anger, disgust, fear, happiness, sadness, and surprise—as recognizably expressed worldwide. The late 20th century marked the computational turn in emotion classification with the advent of affective computing. Rosalind Picard's Affective Computing (1997) introduced the field, advocating for machines that recognize, interpret, and simulate human emotions to enable more natural human-computer interactions; this work bridged engineering and psychology, spurring developments in emotion detection via sensors and algorithms.

Categorical Models of Emotion

Discrete Emotions Framework

The discrete emotions framework posits that emotions are distinct, innate categories evolved as universal, biologically hardwired responses to specific environmental elicitors, each associated with unique physiological patterns, expressions, and adaptive functions that promote survival and social coordination. These responses are triggered by particular stimuli—such as threats eliciting fear or achievements prompting joy—and serve evolutionary roles, like motivating avoidance or approach behaviors to enhance fitness across species and cultures. Pioneered as a precursor in Silvan Tomkins' affect theory during the 1960s, this approach emphasized affects as primary motivators, hardwired mechanisms that amplify drives and organize human experience independently of cognition. This framework offers advantages in research and affective computing due to its categorical simplicity, enabling straightforward classification and empirical testing of discrete states rather than continuous variations. For instance, it facilitates automated detection in affective computing by mapping specific signals, like facial muscle activations, to predefined categories such as anger or surprise, streamlining model development and validation. Evidence from cross-species studies supports this framework, showing conserved neural circuits and behavioral patterns—for example, avoidance responses to predators in rodents and primates—that align with discrete emotions, suggesting deep evolutionary roots. Empirical support further bolsters the framework through observations of consistency in emotional triggers and responses across developmental stages and contexts. Infants as young as 6 months exhibit differentiated behavioral reactions to discrete emotional displays, such as approaching joyful expressions while withdrawing from fearful ones, indicating innate recognition without cultural learning. Similarly, fear responses to threats demonstrate reliable elicitation and physiological signatures, like increased heart rate and amygdala activation, across diverse populations and scenarios, underscoring the framework's utility in predicting adaptive outcomes. These patterns, exemplified in basic emotions like fear and happiness, highlight the framework's explanatory power for universal affective phenomena.

Basic Emotions Proposals

One of the most influential proposals for basic emotions comes from psychologist Paul Ekman, who in 1972 identified six fundamental emotions based on cross-cultural studies of facial expressions: anger, disgust, fear, happiness, surprise, and sadness. Ekman defined these as "basic" due to their distinctive universal facial signals that are recognized across diverse cultures, rapid onset, brief duration, involuntary occurrence, and association with specific physiological responses and adaptive functions. In contrast, Robert Plutchik proposed eight primary emotions in the 1980s as part of his psychoevolutionary theory: joy, trust, fear, surprise, sadness, disgust, anger, and anticipation. These were conceptualized as adaptive responses evolved to address fundamental survival problems, arranged in oppositional pairs and capable of blending into dyads, though Plutchik emphasized their distinct neural and behavioral profiles without the strict facial universality focus of Ekman's model. Another prominent framework is Carroll Izard's differential emotions theory, outlined in 1977, which posits 10 to 12 discrete basic emotions, including interest, enjoyment, surprise, distress (or sadness), anger, disgust, contempt, fear, shame, shyness, and guilt. Izard argued these emotions are innate, hardwired patterns of neural activity with specific facial, vocal, and physiological signatures that develop early in infancy and serve distinct motivational roles, differing from Ekman by incorporating emotions like interest and shame. Supporting evidence for these basic emotion proposals draws from the Facial Action Coding System (FACS), developed by Ekman and Wallace Friesen in 1978, which systematically links specific facial muscle movements (action units) to the six core emotions, demonstrating their consistency across observers. Subsequent meta-analyses, such as a 2021 review of numerous studies, have found small average effect sizes (d ≈ 0.13–0.23) for co-occurrence between these predicted facial signals and the corresponding emotions, indicating some but limited support and sparking debate on the strength of these associations while noting some cultural modulation in intensity judgments.

Dimensional Models of Emotion

Circumplex Model

The Circumplex Model, developed by James A. Russell in 1980, provides a two-dimensional framework for representing affective states within a circular structure. This model posits that emotions arise from combinations of two primary dimensions: valence, which captures the hedonic tone from displeasure (negative) to pleasure (positive), and arousal, which reflects activation levels from low (deactivated or sleepy) to high (activated or excited). The circular arrangement emerges because affective experiences form a continuum, with opposite emotions positioned 180 degrees apart, such as pleasure directly across from displeasure. The core structure plots emotions on the circumference of the circle, where the horizontal axis denotes valence—ranging from left (displeasure) to right (pleasure)—and the vertical axis denotes arousal, from bottom (low) to top (high). Specific emotions are located at intersections of these dimensions; for example, excitement appears in the quadrant of high positive valence and high arousal, while depression resides in the quadrant of negative valence and low arousal. This configuration allows for the representation of nuanced blends rather than isolated categories, emphasizing that core affect can vary continuously in intensity and direction. The model was validated through factor analysis and multidimensional scaling of self-report ratings on 28 carefully selected emotion terms, consistently revealing a circular structure across diverse samples and languages. Mathematically, positions in the model are expressed as coordinates (v, a), where v is the valence score (typically from -1 to +1) and a is the arousal score (also from -1 to +1), with the origin at neutral affect and distance from the center indicating overall intensity. For instance, a high-arousal negative emotion such as fear is commonly positioned at approximately (-0.5, +0.6), reflecting negative valence combined with high arousal. These coordinates derive from empirical mappings of affective terms onto the axes, enabling quantitative analysis of emotional similarity via angular or Euclidean metrics in the plane. The Circumplex Model has been widely applied in psychology for mood assessment and affective measurement. It underpins tools like the Positive and Negative Affect Schedule (PANAS), which operationalizes positive affect (aligned with high arousal, positive valence) and negative affect (high arousal, negative valence) as orthogonal factors in a rotated circumplex framework, facilitating reliable self-report of emotional states.
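The coordinate representation supports simple similarity computations. A minimal sketch, using hypothetical (valence, arousal) coordinates rather than empirically derived ones:

```python
# Minimal sketch of quantifying emotional similarity and intensity in the
# valence-arousal plane; coordinates are hypothetical, not from the sources.
import math

EMOTIONS = {  # (valence, arousal) in [-1, +1]^2
    "excitement": (0.7, 0.7),
    "contentment": (0.7, -0.4),
    "fear": (-0.5, 0.6),
    "depression": (-0.6, -0.6),
}

def distance(e1: str, e2: str) -> float:
    """Euclidean distance between two emotions in the circumplex plane."""
    (v1, a1), (v2, a2) = EMOTIONS[e1], EMOTIONS[e2]
    return math.hypot(v1 - v2, a1 - a2)

def intensity(e: str) -> float:
    """Distance from neutral affect at the origin."""
    v, a = EMOTIONS[e]
    return math.hypot(v, a)

print(distance("fear", "excitement"))  # dissimilar states lie far apart
print(intensity("depression"))         # overall affective intensity
```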

PAD and Vector Models

The Pleasure-Arousal-Dominance (PAD) model, developed by Albert Mehrabian and James A. Russell, represents emotions as points in a three-dimensional space defined by pleasure (or valence, ranging from positive to negative affect), arousal (from calm to excited), and dominance (from submissive to controlling). This framework extends the two-dimensional circumplex model by incorporating dominance as a third axis to capture nuances in emotional power dynamics. Unlike purely valence-arousal approaches, PAD allows for distinctions such as fear, which scores low on dominance to reflect feelings of submission or lack of control. In the PAD model, emotional states can be quantified and predicted using the three dimensions; empirical validation of PAD relies on semantic differential scales, where participants rate emotional stimuli on bipolar adjective pairs (e.g., happy-sad for pleasure, stimulated-relaxed for arousal, controlling-controlled for dominance), confirming the dimensions' near-orthogonality and comprehensiveness in describing a wide range of affects. The vector model of emotions, as proposed by Fontaine et al. (2007), conceptualizes emotions as vectors within a multi-dimensional space derived from statistical analysis of self-reported emotional experiences across cultures. This approach identifies three primary axes—valence (pleasantness), power (similar to dominance, reflecting control versus submissiveness), and arousal (activation level)—which together account for substantial variance in emotional ratings beyond two-dimensional models. By treating emotions as positional vectors, the model enables precise mapping and interpolation of blended states, such as anxiety (high arousal, negative valence, low power). Both PAD and vector models have been integrated into computational systems, particularly for virtual agents, where they facilitate real-time emotion simulation and expression through parameterized behaviors like gesture intensity or voice modulation based on dimensional scores. These applications leverage the models' mathematical structure for scalable emotion modeling, enhancing human-agent interactions in areas such as education and gaming.
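Dimensional scores also support straightforward classification by proximity to labeled prototypes. A minimal nearest-prototype sketch, with illustrative PAD coordinates that are assumptions rather than published values:

```python
# Minimal sketch of classifying a PAD triple by its nearest labeled
# prototype; coordinates are illustrative assumptions.
import math

PAD_PROTOTYPES = {
    "joy":     ( 0.8,  0.5,  0.4),   # pleasant, aroused, in control
    "fear":    (-0.6,  0.6, -0.4),   # unpleasant, aroused, submissive
    "anger":   (-0.5,  0.6,  0.3),   # unpleasant, aroused, dominant
    "boredom": (-0.3, -0.6, -0.1),   # unpleasant, calm
}

def classify(p: float, a: float, d: float) -> str:
    """Return the prototype emotion nearest to the (P, A, D) point."""
    return min(
        PAD_PROTOTYPES,
        key=lambda e: math.dist((p, a, d), PAD_PROTOTYPES[e]),
    )

print(classify(-0.55, 0.6, 0.25))  # closest to the 'anger' prototype
```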

Plutchik's Wheel Model

Robert Plutchik's Wheel Model, also known as the Wheel of Emotions, is a psychoevolutionary theory that conceptualizes emotions as adaptive responses shaped by evolutionary processes to enhance survival. Developed by Robert Plutchik, the model posits that emotions function as mechanisms for problem-solving in various life contexts, such as protection, territoriality, and affiliation. Central to this framework is the idea that primary emotions are prototypes derived from biological imperatives, with more complex emotions arising from their combinations. This theory was fully articulated in Plutchik's 1980 book, Emotion: A Psychoevolutionary Synthesis, where he integrated empirical evidence from ethology, neurophysiology, and psychology to argue that emotions are universal yet modifiable by learning and culture. The model's visual representation takes the form of a wheel, with eight primary emotions arranged in opposing pairs around its circumference: joy opposite sadness, trust opposite disgust, fear opposite anger, and surprise opposite anticipation. These primaries are depicted as wedges that vary in intensity from mild to extreme—for instance, annoyance escalating to rage for anger, or serenity to ecstasy for joy—illustrating how emotional states can intensify based on stimulus strength or duration. Adjacent emotions on the wheel can blend dyadically to form secondary emotions; for example, joy combined with trust produces love, while fear mixed with surprise yields awe. This circular structure emphasizes similarities between neighboring emotions, opposition between diametric ones, and the potential for mixtures, akin to color mixing, thereby providing a dynamic taxonomy that avoids rigid categorization. In therapeutic contexts, the wheel serves as a tool for emotional granularity, enabling individuals to identify nuanced feelings and trace them back to primary states, which supports interventions in counseling and emotional regulation. Similarly, in design fields like user experience (UX), it informs the elicitation of targeted emotions through interfaces, such as fostering trust in financial apps via intuitive layouts. Post-2000 adaptations have extended the model to natural language processing, particularly in sentiment analysis, where it structures multi-label emotion detection in text; for instance, hybrid approaches integrate Plutchik's blends with machine learning models to classify user sentiments in social media or recommender systems, improving accuracy in emotion-aware applications.

Comparative Lists and Taxonomies

Core Lists of Emotions

One of the most influential core lists in emotion classification is Paul Ekman's set of six basic emotions, derived from cross-cultural studies of facial expressions: anger, disgust, fear, happiness, sadness, and surprise. These emotions are posited as universal, biologically hardwired responses that serve adaptive functions, such as signaling threats (fear) or social bonding (happiness). Brief descriptors include anger as a response to goal blockage or offense, prompting confrontation; disgust as aversion to contaminants; fear as preparation for danger; happiness as pleasure from goal attainment; sadness as reaction to loss; and surprise as a brief orienting response to novelty. Another foundational enumeration comes from William G. Parrott's framework, building on earlier prototype analyses, which identifies six primary emotions—love, joy, surprise, anger, sadness, and fear—as central categories around which more specific terms cluster. This list, detailed in Parrott's compilation of essential readings, expands into sub-emotions without strict hierarchy in its core form; for instance, joy encompasses around 25 related terms such as cheerfulness, bliss, delight, elation, jubilation, satisfaction, and zest, reflecting nuanced variations in positive affective states. Love includes sub-emotions like affection, lust, and longing, while anger covers irritability, rage, and torment, emphasizing emotional prototypes derived from linguistic and experiential data. In the domain of affective computing, the HUMAINE network's Emotion Annotation and Representation Language (EARL) proposal from the mid-2000s offers a practical list of 48 emotion terms, selected for usability in human-machine interaction and consolidated from empirical observations in naturalistic settings. These terms are grouped into six broad categories based on valence and control dimensions: negative and forceful (e.g., anger, annoyance, contempt); negative and not in control (e.g., anxiety, embarrassment, fear); negative thoughts (e.g., envy, frustration, regret); neutral (e.g., boredom, interest, satisfaction); positive and forceful (e.g., elation, enthusiasm, pride); and positive thoughts (e.g., affection, happiness, love). The list prioritizes terms that are reliably distinguishable and applicable in annotation tasks, such as despair, guilt, and shame, to support computational modeling without assuming universality. Contrasting with static enumerations, Nico H. Frijda's approach incorporates temporal dynamics into emotion lists by framing emotions as modes of action readiness with distinct onset, peak, and offset patterns. In his seminal work, emotions like anger involve rapid onset and sustained approach tendencies for confrontation, while fear features quick onset and avoidance preparation that may linger as anxiety; sadness entails gradual onset and withdrawal tendencies with prolonged offset. This perspective highlights how lists of emotions—such as joy (expansive engagement with slow offset) or disgust (abrupt rejection with quick resolution)—must account for duration and modulation to capture their functional roles in behavioral regulation. Plutchik's eight primary emotions—joy, trust, fear, surprise, sadness, disgust, anger, and anticipation—provide another brief core list, emphasizing dyadic combinations for complexity.

Grouped and Hierarchical Taxonomies

One prominent hierarchical taxonomy is the tree-structured model developed by Shaver et al. (1987), which organizes emotions based on cluster analysis derived from empirical studies of English emotion terms. This framework posits six primary emotion categories—love, joy, surprise, anger, sadness, and fear—as central nodes, each extending into secondary subcategories (25 in total) and tertiary specifics, encompassing approximately 135 terms overall to capture nuanced relationships among related affects. Expanding on such structures, Parrott (2001) introduced a detailed tree classification featuring the same six primary emotions, grouped into positive (love and joy) and negative (anger, sadness, fear, and surprise) families for broader organization. Each primary branches into secondary emotions (27 total) and tertiary ones (92 total), illustrating hierarchical nesting; for instance, anger includes the secondary emotions irritability and rage, with tertiary examples like aggravation under irritability and fury under rage. This model emphasizes familial resemblances while accommodating specificity in emotional experience. In a culturally informed grouping approach, Tiffany Watt Smith (2015) compiled The Book of Human Emotions, an encyclopedia of 154 terms drawn from global sources, presented alphabetically yet clustered thematically by cultural and historical contexts to highlight interconnections. Examples include schadenfreude (German for pleasure derived from others' misfortune, grouped with envious joys) and saudade (Portuguese longing for an absent ideal, linked to melancholic yearnings), underscoring how emotions form relational clusters beyond universal basics. Emotion dynamics introduce a temporal dimension to classifications, focusing on how affects evolve over time through physiological markers. Kreibig (2010) reviewed 134 studies on autonomic nervous system responses, revealing patterned changes such as rapidly rising sympathetic activation in acute fear versus gradually falling patterns in sustained sadness, enabling dynamic taxonomies that layer static categories with onset, peak, and offset distinctions for more process-oriented grouping.
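Tree-structured taxonomies map naturally onto nested data structures. A minimal sketch of an excerpt of the Shaver/Parrott hierarchy with a path lookup; the excerpt and helper function are illustrative:

```python
# Minimal sketch: a small excerpt of the Parrott-style emotion tree and a
# lookup that walks primary -> secondary -> tertiary levels.
PARROTT_TREE = {
    "anger": {
        "irritability": ["aggravation", "agitation", "annoyance"],
        "rage": ["fury", "wrath", "hostility"],
    },
    "joy": {
        "cheerfulness": ["amusement", "bliss", "delight"],
        "optimism": ["eagerness", "hope"],
    },
}

def find_path(term: str) -> tuple[str, ...] | None:
    """Return (primary, secondary[, tertiary]) for an emotion term."""
    for primary, secondaries in PARROTT_TREE.items():
        if term == primary:
            return (primary,)
        for secondary, tertiaries in secondaries.items():
            if term == secondary:
                return (primary, secondary)
            if term in tertiaries:
                return (primary, secondary, term)
    return None

print(find_path("fury"))  # ('anger', 'rage', 'fury')
```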

Specialized Proposals

The Positive and Negative Affect (PANA) model, proposed by Watson and Tellegen in 1985, posits two orthogonal dimensions of affect: positive activation, characterized by enthusiasm and alertness, and negative activation, marked by distress and fear. This framework diverges from traditional circumplex models by rotating the axes 45 degrees to emphasize high- and low-arousal states within positive and negative valences, enabling finer distinctions in mood structures without assuming bipolar opposites. The model's influence persists in psychological assessment tools like the PANAS scale, which measures these factors to classify emotional states in clinical and research settings. Building on basic emotion theories, expansions into multi-axial frameworks have proposed more nuanced categorizations. For instance, Cowen and Keltner's 2017 atlas, derived from self-reports elicited by 2,185 video stimuli, identifies 27 distinct emotion categories—such as awe, amusement, and nostalgia—that form continuous gradients rather than discrete boundaries, contrasting with the six basic emotions by revealing overlaps like amusement blending into excitement. These categories emerge across diverse cultural contexts, with analyses showing they align along semantic axes (e.g., valence and arousal), providing a richer taxonomy for understanding emotional variability beyond binary or low-dimensional models. Neuroscience-informed constructionist approaches further specialize emotion classification by viewing emotions as emergent from core psychological ingredients rather than innate circuits. Lindquist et al.'s 2012 meta-analysis of 151 functional neuroimaging experiments (from 143 articles) supports this, finding no dedicated brain regions for specific emotions but instead domain-general networks for conceptualization, language, and core affect that construct experiences like fear or anger contextually. This challenges locationist views, proposing lists of constructed emotions (e.g., from meta-reviews of experiential reports) that prioritize situational and linguistic factors over fixed categories. Post-2020 advancements in deep learning have introduced AI-driven proposals for multimodal emotion classification, integrating physiological, textual, and visual data. Recent multimodal frameworks in affective computing fuse modalities via neural architectures (e.g., transformers) to classify emotions with improved accuracies on datasets like IEMOCAP, addressing gaps in unimodal approaches by capturing cross-modal correlations for real-world applications. These proposals extend dimensional bases by incorporating dynamic, data-driven ontologies that adapt to user-specific contexts.
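Decision-level (late) fusion, one of the simpler multimodal strategies, can be sketched as a weighted average of per-modality class probabilities. The labels, weights, and outputs below are illustrative assumptions, not a published architecture:

```python
# Minimal late-fusion sketch: weighted averaging of per-modality softmax
# outputs (decision-level fusion); values are hypothetical.
import numpy as np

LABELS = ["anger", "happiness", "sadness", "neutral"]

def late_fusion(prob_face, prob_voice, prob_text, weights=(0.4, 0.3, 0.3)) -> str:
    """Fuse per-modality class probabilities and return the top label."""
    stacked = np.stack([prob_face, prob_voice, prob_text])
    fused = np.average(stacked, axis=0, weights=weights)
    return LABELS[int(np.argmax(fused))]

# Hypothetical outputs of three unimodal classifiers:
face = np.array([0.1, 0.7, 0.1, 0.1])
voice = np.array([0.2, 0.5, 0.2, 0.1])
text = np.array([0.1, 0.6, 0.2, 0.1])
print(late_fusion(face, voice, text))  # 'happiness'
```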

Criticisms and Cultural Variations

Methodological Critiques

One major methodological critique of emotion classification approaches, particularly basic emotions proposals, stems from their overreliance on Western samples, which introduces cultural bias and undermines claims of universality. For instance, studies using cluster analysis of facial muscle movements have shown that Western participants represent the six basic emotions with distinct patterns, whereas East Asian participants exhibit more overlap, suggesting that Ekman's universals may reflect Western cultural norms rather than innate categories. This sampling bias has been highlighted in 2000s research, where cross-sample comparisons revealed that emotion recognition accuracy drops significantly when Western-trained models are applied to non-Western groups, questioning the generalizability of discrete emotion frameworks. The ongoing debate between discrete and dimensional models further exposes theoretical limitations, with evidence indicating that emotions often manifest as blends rather than pure categories, challenging the boundaries of categorical classification systems. Barrett's psychological constructionism posits that emotions are not natural kinds with fixed essences but are constructed from core affect, conceptualization, and context, as supported by empirical findings showing inconsistent categorization across individuals and situations. This view undermines discrete models by demonstrating that emotional experiences frequently hybridize, such as anger blending with disgust, which dimensional approaches like the circumplex model better accommodate but still struggle to fully capture without additional contextual layers. Measurement challenges compound these issues, particularly with self-reports prone to biases and physiological indicators lacking unique signatures per emotion. Self-report methods, while common for assessing subjective experience, are susceptible to retrospective bias, demand characteristics, and cultural influences on emotional labeling, leading to low convergence with other modalities. Similarly, reviews of autonomic responses across 134 studies found only modest specificity for emotions, with significant overlap in patterns like heart rate acceleration for both anger and fear, indicating no reliable "fingerprints" to validate discrete classifications. Recent 2020s critiques, informed by neuroscience research, highlight the outdated assumption of static emotion categories, emphasizing instead their dynamic, context-dependent nature shaped by brain adaptability. Neuroimaging studies reveal that emotional processing circuits, such as those involving the amygdala, exhibit plasticity in response to experience and environment, allowing emotion representations to vary across contexts and individuals rather than adhering to fixed schemas. This adaptability challenges traditional classifications by showing that what is labeled as a "basic" emotion can reorganize through learning and neuroplastic mechanisms, rendering rigid taxonomies insufficient for capturing real-world variability.

Cross-Cultural Considerations

Cultural relativity in emotion classification is highlighted by ethnographic studies demonstrating that emotions are not universal but deeply embedded in specific cultural contexts. Catherine Lutz's research on the Ifaluk people of Micronesia revealed unique emotional concepts, such as "fago," which encompasses compassion, sadness, and frustration in a way that defies Western categorical distinctions, challenging the assumption of discrete, cross-culturally consistent emotions. This work underscores how cultural practices shape emotional lexicons and experiences, suggesting that classification systems derived from Western samples may overlook or misinterpret non-Western emotional realities. Cultural display rules further complicate universal emotion models by dictating when and how emotions are expressed, varying significantly across societies. In the 1990s, Paul Ekman and collaborators, building on earlier neurocultural theory, acknowledged these variations through studies showing that Japanese participants often suppress negative facial expressions in social settings, such as when observed by authority figures, contrasting with more overt displays in individualistic cultures like the United States. These rules, influenced by social norms, can mask underlying emotional universals, leading to misclassifications in cross-cultural assessments. Differences between collectivist and individualist cultures also affect emotion emphasis and classification, particularly in self-conscious emotions. Hazel Markus and Shinobu Kitayama's seminal analysis illustrated how interdependent selves in collectivist societies, such as Japan, prioritize shame tied to social harmony, while independent selves in individualist cultures, like the United States, emphasize guilt focused on personal standards. This cultural divergence implies that emotion taxonomies must account for relational versus autonomous dimensions to avoid ethnocentric biases. Recent advancements in AI datasets have reinforced these insights by uncovering distinct non-Western emotion clusters that deviate from traditional Western models. Batja Mesquita's relational models from the 2010s frame emotions as emerging from interpersonal and cultural contexts rather than isolated states, a perspective validated in 2020s AI studies where datasets from diverse regions reveal unique emotional patterns, such as context-dependent blends in South Asian or African samples that challenge binary or basic emotion frameworks. These findings highlight the need for inclusive datasets to improve emotion classification accuracy across cultures. These cultural and bias-related challenges have prompted regulatory responses, including the European Union's AI Act, which prohibits emotion recognition systems in workplaces and educational settings to safeguard privacy and prevent discrimination based on inaccurate or culturally insensitive classifications, with the ban effective from February 2, 2025.

Applications in Expression Mapping

Facial Expression Analysis

Facial expression analysis serves as a primary method for classifying emotions by examining visible muscle movements on the face, which are interpreted as indicators of internal emotional states. This approach relies on the systematic coding of facial behaviors to map expressions to discrete emotion categories, such as happiness, anger, or surprise. Pioneered in psychological research, it has evolved into computational systems that automate recognition for applications in human-computer interaction and affective computing. The foundational framework for this analysis is the Facial Action Coding System (FACS), developed by Paul Ekman and Wallace V. Friesen in 1978. FACS decomposes facial movements into 44 action units (AUs), each corresponding to specific muscle activations, allowing researchers to objectively describe expressions without inferring underlying emotions. For instance, surprise is often coded as the combination of AU1 (inner brow raiser) and AU2 (outer brow raiser), along with AU5 (upper lid raiser) and AU26 (jaw drop). This system enables detailed annotation of both subtle and overt expressions, facilitating reliable emotion classification across studies. Cross-culturally, facial expressions demonstrate partial universality, with recognition accuracies typically ranging from 70% to 80% for basic emotions like happiness and disgust, supporting the idea of innate facial signals. However, cultural variations introduce "dialects" in expression patterns, where Eastern observers, for example, show lower accuracy in distinguishing fear from surprise compared to Westerners, due to differences in display rules and categorization models. A 2012 study using data-driven modeling revealed that while core expression components are shared, cultural contexts modulate their intensity and combination, challenging strict universality claims. In technological applications since the 2010s, deep learning models, particularly convolutional neural networks (CNNs), have enabled real-time facial expression recognition by processing image sequences or video frames to detect AU patterns or holistic features. These models achieve high performance on benchmark datasets, often exceeding 90% accuracy in controlled settings, and integrate with FACS for hybrid systems that combine anatomical precision with end-to-end learning. For example, architectures trained on large corpora of labeled faces allow deployment in embedded devices for monitoring driver drowsiness or user sentiment in interactive applications. Limitations in facial expression analysis include challenges in detecting micro-expressions—brief, involuntary flashes lasting 1/25 to 1/5 of a second that reveal concealed emotions—and distinguishing posed from genuine expressions. Ekman's 2003 analysis highlighted that genuine emotions involve specific, involuntary muscle actions (e.g., the Duchenne smile with orbicularis oculi contraction), whereas posed ones often lack these and appear more symmetrical or prolonged. Automated systems struggle with these nuances, particularly in naturalistic settings with occlusions or low lighting, reducing reliability for subtle or deceptive cues.
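The AU-to-emotion mapping can be sketched as prototype matching over sets of detected action units. The prototypes below use the surprise combination given above plus two widely cited examples, and should be read as illustrative rather than a full FACS implementation:

```python
# Minimal sketch: match detected FACS action units against a few emotion
# prototypes (illustrative AU combinations, not an exhaustive mapping).
AU_PROTOTYPES = {
    "surprise": {1, 2, 5, 26},  # brow raisers, upper lid raiser, jaw drop
    "happiness": {6, 12},       # cheek raiser + lip corner puller (Duchenne)
    "sadness": {1, 4, 15},      # inner brow raiser, brow lowerer, lip depressor
}

def match_emotions(detected_aus: set[int]) -> list[str]:
    """Return emotions whose prototype AUs are all present in the detection."""
    return [e for e, aus in AU_PROTOTYPES.items() if aus <= detected_aus]

print(match_emotions({1, 2, 5, 26}))  # ['surprise']
print(match_emotions({6, 12, 25}))    # ['happiness']
```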

Broader Mapping Techniques

Physiological mapping techniques leverage autonomic responses to classify emotions beyond facial cues, providing objective indicators of internal states. Heart rate variability (HRV), a measure of fluctuations in time between heartbeats, is widely used to detect arousal dimensions of emotions, with patterns such as reduced HRV during high-arousal states like fear or anger, and increased variability in calmer emotions like contentment. These patterns stem from sympathetic and parasympathetic influences, as detailed in Kreibig's review of 134 studies, which identified discrete autonomic signatures for emotions including anger (moderate heart rate acceleration) and sadness (bradycardia). Similarly, electroencephalography (EEG) facilitates valence classification by analyzing brainwave asymmetries, particularly frontal alpha power, where greater relative left frontal activity correlates with positive valence and right with negative. This approach, rooted in Davidson's foundational work on hemispheric differences, enables machine learning models to achieve up to 80% accuracy in binary valence detection using features like alpha band power from datasets such as DEAP. Vocal and postural channels offer additional non-facial avenues for emotion classification, capturing expressive variations in speech and body dynamics. In prosody analysis, acoustic features like fundamental frequency (pitch) are key; for instance, elevated pitch and a wider pitch range often signal anger, alongside increased speech rate and intensity, distinguishing it from lower-pitched sadness. Machine learning models extract these features from audio signals, achieving recognition rates of 70-85% for discrete emotions on corpora like IEMOCAP. Postural mapping examines body-movement codes, such as forward-leaning postures for approach-oriented emotions like joy or tense rigidity for fear. Wallbott's empirical study across participants from multiple countries revealed that specific movement qualities—e.g., expansive gestures for elation and slumped shoulders for sadness—are reliably recognized, supporting partial universality in bodily expressions despite cultural nuances. Multimodal integration combines these physiological, vocal, and postural signals with other modalities, such as facial data, using fusion architectures to enhance robustness and accuracy. In the 2020s, transformer-based models and convolutional neural networks have fused audio prosody with visual cues, yielding superior performance; for example, hybrid feature-level and decision-level fusion on datasets like CMU-MOSEI has reached 88-92% accuracy for valence-arousal estimation, outperforming unimodal baselines by 10-15%. These methods employ attention mechanisms to weigh modality contributions dynamically, addressing issues like noisy environments where voice or posture provides complementary evidence. Advancements in 2025-era wearable technology have expanded real-time broader mapping, with devices like smartwatches integrating HRV, galvanic skin response, and accelerometer data for continuous stress and arousal monitoring. For instance, models in fitness trackers use machine learning on physiological signals to detect elevated stress, with reported accuracies of 80-90% in some ambulatory studies, though 2025 reviews highlight limitations such as misclassifying physical exertion as stress, and overall real-world reliability remains under evaluation. As of 2025, the shift toward ubiquitous, non-invasive tools continues, with ongoing emphasis on improving ecological validity amid challenges in real-world accuracy.
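One widely used HRV feature is the root mean square of successive differences (RMSSD) between inter-beat intervals; lower values typically accompany high-arousal stress states. A minimal sketch with hypothetical interval data (an illustration, not a clinical algorithm):

```python
# Minimal sketch of the RMSSD HRV feature from inter-beat intervals (ms).
import math

def rmssd(ibi_ms: list[float]) -> float:
    """Root mean square of successive differences between heartbeats."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

calm     = [820, 835, 810, 840, 825, 845]  # hypothetical intervals
stressed = [640, 642, 639, 641, 640, 643]
print(rmssd(calm), rmssd(stressed))  # higher variability when calm
```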
