Bioarchaeology
from Wikipedia

Bioarchaeology (osteoarchaeology, osteology or palaeo-osteology[1]) in Europe describes the study of biological remains from archaeological sites. In the United States it is the scientific study of human remains from archaeological sites.

The term was coined by British archaeologist Grahame Clark, who in 1972 defined it as the study of animal and human bones from archaeological sites. Jane Buikstra proposed the current US definition in 1977. Human remains can inform about the health, lifestyle, diet, mortality and physique of past people.[2] Although Clark used the term to cover only human and animal remains, archaeologists increasingly include botanical remains as well.[3]

Bioarchaeology was largely born from the practices of New Archaeology, which developed in the United States in the 1970s as a reaction to a mainly cultural-historical approach to understanding the past. Proponents of New Archaeology advocate testing hypotheses about the interaction between culture and biology, or a biocultural approach. Some archaeologists advocate a more holistic approach that incorporates critical theory.[4]

Paleodemography

A skeleton in a bioarchaeology lab

Paleodemography studies the demographic characteristics of past populations.[5] Bioarchaeologists use paleodemography to create life tables, a type of cohort analysis, to understand the demographic characteristics (such as risk of death or sex ratio) of a given age cohort within a population. It is often necessary to estimate the age and sex of individuals based on specific morphological characteristics of the skeleton.
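
The life-table calculation described above can be sketched in code. This is a minimal abridged life table; the age intervals and death counts below are hypothetical, and real paleodemographic tables add further columns (years lived, life expectancy).

```python
# Minimal abridged life table built from skeletal age-at-death counts.
# The death counts per age cohort are hypothetical examples.

def life_table(deaths):
    """deaths: list of (age_interval_label, number_of_deaths).
    Returns rows of (label, Dx, dx, lx, qx) where
    dx = percent of the cohort dying in the interval,
    lx = percent surviving to the start of the interval,
    qx = probability of death within the interval."""
    total = sum(d for _, d in deaths)
    rows, survivors = [], 100.0              # lx starts at 100 percent
    for label, d in deaths:
        dx = 100.0 * d / total               # percent of cohort dying here
        qx = dx / survivors if survivors else 0.0
        rows.append((label, d, round(dx, 1), round(survivors, 1), round(qx, 3)))
        survivors -= dx                      # fewer survivors enter next interval
    return rows

for row in life_table([("0-14", 30), ("15-34", 40), ("35-49", 20), ("50+", 10)]):
    print(row)
```

Note that qx rises with age even when raw death counts fall, because each interval's deaths are divided by a shrinking pool of survivors.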

Age


Age estimation attempts to determine the skeletal/biological age-at-death. The primary assumption is that an individual's skeletal age is closely associated with their chronological age. Age estimation can be based on patterns of growth and development or on degenerative changes in the skeleton.[6] A variety of methods to assess these types of changes have been developed. For instance, in children age is typically estimated by assessing dental development, ossification and fusion of specific skeletal elements, or long bone length.[7] In children, the serial eruption of teeth from the gums is the most reliable indicator of age; fully developed teeth, however, are less informative.[8] In adults, degenerative changes to the pubic symphysis, the auricular surface of the ilium, the sternal end of the 4th rib, and dental attrition are commonly used to estimate skeletal age.[9][10][11]

Until the age of about 30, human bones keep growing. Different bones fuse at different points of growth.[12] This development can vary across individuals. Wear and tear on bones further complicates age estimates. Often, estimates are limited to 'young' (20–35 years), 'middle' (35–50 years), or 'old' (50+ years).[8]
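
As a minimal illustration of how such broad categories are applied, the binning below follows the ranges quoted above; the handling of boundary ages is a choice made here, not a published standard.

```python
def broad_age_category(age_estimate):
    """Map an adult skeletal age estimate (in years) to the broad
    categories used in the text: young (20-35), middle (35-50),
    old (50+). Boundary ages are assigned to the older bin's start
    by convention chosen here."""
    if age_estimate < 20:
        return "subadult"
    if age_estimate < 35:
        return "young"
    if age_estimate < 50:
        return "middle"
    return "old"

print(broad_age_category(28))   # young
print(broad_age_category(42))   # middle
```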

Sex


Differences in male and female skeletal anatomy are used by bioarchaeologists to determine the biological sex of human skeletons. Humans are sexually dimorphic, although overlap in body shape and sexual characteristics is possible. Not all skeletons can be assigned a sex, and some may be wrongly identified. Biological males and biological females differ most in the skull and pelvis; bioarchaeologists focus on these body parts, although other body parts can be used. The female pelvis is generally broader than the male pelvis, and the angle between the two inferior pubic rami (the sub-pubic angle) is wider and more U-shaped, while the sub-pubic angle of the male is more V-shaped and less than 90 degrees.[13][14]

In general, the male skeleton is more robust than the female skeleton because of males' greater muscle mass. Male skeletons generally have more pronounced brow ridges, nuchal crests, and mastoid processes. Skeletal size and robustness are influenced by nutrition and activity levels. Pelvic and cranial features are considered to be more reliable indicators of biological sex. Sexing the skeletons of young people who have not completed puberty is more difficult and problematic, because the body has not fully developed.[13]
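
A toy sketch of trait-based sex estimation: cranial traits such as those above are often scored on a 1 (gracile) to 5 (robust) ordinal scale and combined. The averaging and cutoffs below are illustrative only; actual methods use population-specific discriminant functions.

```python
def estimate_sex(scores):
    """scores: dict of trait -> ordinal score from 1 (gracile,
    female-like) to 5 (robust, male-like), e.g. for the brow ridge,
    nuchal crest, and mastoid process. A naive average-based call;
    the 2.5/3.5 cutoffs are arbitrary choices for illustration."""
    mean = sum(scores.values()) / len(scores)
    if mean <= 2.5:
        return "probable female"
    if mean >= 3.5:
        return "probable male"
    return "indeterminate"

print(estimate_sex({"brow_ridge": 4, "nuchal_crest": 5, "mastoid": 4}))  # probable male
print(estimate_sex({"brow_ridge": 3, "nuchal_crest": 3, "mastoid": 3}))  # indeterminate
```

The "indeterminate" band reflects the point made above: not all skeletons can be assigned a sex.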

Bioarchaeological sexing of skeletons is not error-proof. Recording errors and re-arranging of human remains may play a part in such misidentification.

Direct testing of bioarchaeological methods for sexing skeletons by comparing gendered names on coffin plates from the crypt at Christ Church, Spitalfields, London to the associated remains achieved a 98 percent success rate.[15]

Gendered work patterns may leave marks on bones and be identifiable in the archaeological record. One study found extremely arthritic big toes, a collapse of the last dorsal vertebrae, and muscular arms and legs among female skeletons at Abu Hureyra, interpreting this as indicative of gendered work patterns.[16] Such skeletal changes could have resulted from women spending long periods kneeling while grinding grain with the toes curled forward. Investigation of gender from mortuary remains is of growing interest to archaeologists.[17]

Modern sex determination methods


Recent developments in bioarchaeological methods have introduced more accurate and standardized techniques for sex estimation, especially when skeletal preservation is poor. Metric analyses of pelvic morphology using tools such as the Diagnose Sexuelle Probabiliste (DSP) method have achieved over 95% accuracy in adult individuals when analyzing the os coxae, using discriminant functions based on population-specific reference data.[18] Geometric morphometric analyses of cranial and pelvic landmarks, particularly when paired with statistical classifiers or machine learning algorithms, have also shown high success rates in identifying sex across both forensic and archaeological samples.[19] Molecular techniques have also become integrated into bioarchaeological practice. Ancient DNA (aDNA) shotgun sequencing enables near-perfect sex determination by quantifying X- and Y-chromosome reads, proving especially valuable when osteological indicators are absent or ambiguous.[20] Where DNA preservation is insufficient, dental proteomics, such as the detection of amelogenin peptides in tooth enamel, provides a minimally destructive and highly reliable alternative for sex estimation.[21]
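
The X/Y read-counting approach can be sketched as below. The statistic is the fraction of sex-chromosome reads mapping to Y (often called Ry); the cutoffs used here (~0.016 for XX, ~0.075 for XY) follow commonly cited values, but exact thresholds vary by study, and real pipelines also compute confidence intervals on Ry.

```python
def assign_sex_from_reads(n_x, n_y):
    """Assign genetic sex from counts of sequencing reads aligning to
    the X and Y chromosomes, using the fraction of Y reads among all
    sex-chromosome reads (the 'Ry' statistic). Females (XX) retain a
    small Y fraction from mismapped reads, hence the nonzero cutoff.
    Values between the two cutoffs are left unassigned."""
    total = n_x + n_y
    if total == 0:
        return "undetermined"
    ry = n_y / total
    if ry < 0.016:
        return "XX (female)"
    if ry > 0.075:
        return "XY (male)"
    return "ambiguous"

print(assign_sex_from_reads(9000, 1000))  # ry = 0.1  -> XY (male)
print(assign_sex_from_reads(9990, 10))    # ry = 0.001 -> XX (female)
```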

Non-specific stress indicators


Dental non-specific stress indicators


Dental non-specific stress indicators are features found on teeth that reflect episodes of physiological stress experienced during childhood, particularly during the period of enamel formation. They are described as "non-specific" because, while they signal that a stress event occurred, they do not identify the exact cause, such as whether it resulted from malnutrition, illness, or infection.

Enamel forms through a process called amelogenesis, carried out by specialized cells known as ameloblasts, which produce enamel in sequential layers. When these cells are affected by systemic stress, the enamel formation process can be interrupted or altered, resulting in visible developmental defects.[22][23][24]

Enamel hypoplasia


Enamel hypoplasia refers to transverse furrows or pits that form in the enamel surface of teeth when the normal process of tooth growth stops, leaving a deficit. Enamel hypoplasias generally form due to disease and/or poor nutrition.[13] Linear furrows are commonly referred to as linear enamel hypoplasias (LEHs); LEHs can range in size from microscopic to visible to the naked eye. By examining the spacing of perikymata grooves (horizontal growth lines), the duration of the stressor can be estimated,[25] although Mays argued that the width of the hypoplasia bears only an indirect relationship to the duration of the stressor.
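
A back-of-the-envelope version of the perikymata-based duration estimate: duration is the number of perikymata spanned by the defect times the individual's periodicity (days of enamel growth per perikyma, typically 6–12). The default of 8 days is a placeholder, and as the caveat attributed to Mays above notes, this gives only an indirect estimate.

```python
def stress_duration_days(n_perikymata, periodicity_days=8):
    """Rough duration of the growth disruption behind a linear enamel
    hypoplasia: perikymata spanned by the defect times the periodicity
    (days of enamel growth per perikyma; 6-12 is the usual range, and
    the default of 8 is an assumed midpoint, not a measured value)."""
    return n_perikymata * periodicity_days

print(stress_duration_days(5))       # 40 (days, assuming 8-day periodicity)
print(stress_duration_days(5, 10))   # 50 (days, assuming 10-day periodicity)
```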

Studies of dental enamel hypoplasia are used to study child health. Unlike bone, teeth are not remodeled, so intact enamel can provide a more reliable indicator of past health events. Dental hypoplasias provide an indicator of health status during the time in childhood when the enamel of the tooth crown is forming. The presence, frequency, and severity of enamel hypoplasia (EH) offer valuable information about general health conditions and the occurrence of disease or malnutrition. Elevated rates of EH are often interpreted as evidence of widespread physiological stress, such as famine, infectious disease outbreaks, or prolonged nutritional deficiencies.[26][27] By comparing the prevalence of dental stress indicators across various groups such as different social classes, geographic regions, or time periods bioarcheologists can also infer disparities in living conditions, access to resources, and overall health.[28][29]

Not all enamel layers are visible on the tooth surface, because enamel layers formed early in crown development are buried by later layers. Hypoplasias on this part of the tooth do not show on the tooth surface. Because of this buried enamel, teeth record stressors only from a few months after the start of crown formation. The proportion of enamel crown formation time represented by this buried enamel varies from up to 50 percent in molars to 15–20 percent in anterior teeth.[13] Surface hypoplasias record stressors occurring from about one to seven years of age, or up to 13 years if the third molar is included.[30]

Skeletal non-specific stress indicators


Porotic hyperostosis/cribra orbitalia


It was long assumed that iron deficiency anemia has marked effects on the flat bones of the cranium of infants and young children: as the body attempts to compensate for low iron levels by increasing red blood cell production in the young, sieve-like lesions develop in the cranial vault (termed porotic hyperostosis) and/or the orbits (termed cribra orbitalia). The affected bone is spongy and soft.[4]

It is, however, unlikely that iron deficiency anemia is a cause of either porotic hyperostosis or cribra orbitalia.[31] These lesions are more likely the result of vascular activity in these areas and are unlikely to be pathological. The development of cribra orbitalia and porotic hyperostosis could also be attributed to causes other than dietary iron deficiency, such as nutrients lost to intestinal parasites. However, dietary deficiencies are the most probable cause.[32]

Anemia incidence may be a result of inequalities within society, and/or indicative of different work patterns and activities among different groups within society. A study of iron-deficiency among early Mongolian nomads showed that although overall rates of cribra orbitalia declined from 28.7 percent (27.8 percent of the total female population, 28.4 percent of the total male population, 75 percent of the total juvenile population) during the Bronze and Iron Ages, to 15.5 percent during the Hunnu (2209–1907 BP) period, the rate of females with cribra orbitalia remained roughly the same, while incidence among males and children declined (29.4 percent of the total female population, 5.3 percent of the total male population, and 25 percent of the juvenile population had cribra orbitalia). This study hypothesized that adults may have lower rates of cribra orbitalia than juveniles because lesions either heal with age or lead to death. Higher rates of cribra orbitalia among females may indicate lesser health status, or greater survival of young females with cribra orbitalia into adulthood.[33]
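
Comparisons of lesion prevalence between groups, as in the Mongolian study above, are typically backed by a simple statistical test. The sketch below uses a two-proportion z-test with hypothetical counts; the original study's raw counts are not reproduced here.

```python
import math

def prevalence(affected, total):
    """Crude prevalence as a percentage."""
    return 100.0 * affected / total

def two_proportion_z(a1, n1, a2, n2):
    """z statistic for comparing two prevalence rates (e.g. cribra
    orbitalia frequency in two periods or two sex groups), using the
    pooled-proportion standard error. |z| > 1.96 indicates a
    difference significant at roughly the 5 percent level."""
    p1, p2 = a1 / n1, a2 / n2
    pooled = (a1 + a2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 29 of 100 females affected in one period
# versus 27 of 95 in another -- nearly identical rates.
z = two_proportion_z(29, 100, 27, 95)
print(round(z, 2))   # well below 1.96: no significant change
```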

Harris lines


Harris lines form before adulthood, when bone growth is temporarily halted or slowed down due to some sort of stress (typically disease or malnutrition).[34] During this time, bone mineralization continues, but growth does not, or does so at reduced levels. If and when the stressor is overcome, bone growth resumes, resulting in a line of increased mineral density visible in a radiograph.[32] Absent removal of the stressor, no line forms.[35]

In particular, deficiencies in protein and vitamins, which lead to delayed longitudinal bone growth, can result in the formation of Harris lines.[36] During the process of endochondral bone growth, the cessation of osteoblastic activity results in the deposition of a thin layer of bone beneath the cartilage cap, potentially forming Harris lines.[37][38] Subsequent recovery, necessary for the restoration of osteoblastic activity, is also implicated in Harris line formation.[39] When matured cartilage cells reactivate, bone growth resumes, thickening the bony stratum. Therefore, complete recovery from periods of chronic illness or malnutrition manifests as transverse lines on radiographs. Lines tend to be thicker with prolonged and severe malnutrition. Harris line formation typically peaks in long bones around 2–3 years after birth and becomes rare after the age of 5 until adulthood. Harris lines occur more frequently in boys than in girls.[40]

Hair


The stress hormone cortisol is deposited in hair as it grows. This has been used successfully to detect fluctuating levels of stress in the later lifespan of mummies.[41]

Mechanical stress and activity indicators


Examining the effects that activities have upon the skeleton allows archaeologists to examine who was doing what kinds of labor, and how activities were structured within society. Labor within the household may be divided according to gender and age, or be based on other social structures. Human remains can allow archaeologists to uncover these patterns.

Living bones are subject to Wolff's law, which states that bones are physically affected and remodeled by physical activity or inactivity.[42] Increases in mechanical stress tend to produce thicker and stronger bones. Disruptions in homeostasis caused by nutritional deficiency or disease[43] or profound inactivity/disuse/disability can lead to bone loss.[44] While the acquisition of bipedal locomotion and body mass appear to determine the size and shape of children's bones,[45][46][47] activity during the adolescent growth period seems to exert a greater influence on the size and shape of adult bones than exercise later in life.[48]

Muscle attachment sites (entheses) have been thought to be impacted in the same way, causing entheseal changes.[49][50] These changes were widely used to study activity-patterns,[51] but research has shown that processes associated with aging have a greater impact than occupational stresses.[52][53][54][55][56][57] It has also been shown that geometric changes to bone structure (described above) and entheseal changes differ in their underlying cause with the latter little affected by occupation.[58][59] Joint changes, including osteoarthritis, have been used to infer occupations, but in general these are also manifestations of the aging process.[51]

Markers of occupational stress, which include morphological changes to the skeleton and dentition as well as joint changes at specific locations, have been widely used to infer specific (rather than general) activities.[60] Such markers are often based on single cases described in late nineteenth-century clinical literature.[61] One such marker has been found to be a reliable indicator of lifestyle: the external auditory exostosis, also called surfer's ear, a small bony protuberance in the ear canal that occurs in those working in proximity to cold water.[62][63]

One example of how these changes have been used to study activities is the African Burial Ground in New York City. This site provides evidence of the brutal working conditions under which the enslaved labored;[64] osteoarthritis of the vertebrae was common even among the young. The pattern of osteoarthritis combined with the early age of onset provides evidence of labor that resulted in mechanical strain to the neck. One male skeleton shows stress lesions at 37 percent of 33 muscle or ligament attachments, showing he experienced significant musculoskeletal stress. Overall, the interred show signs of significant musculoskeletal stress and heavy workloads, although workload and activities varied by individual: some show high levels of stress, while others do not. This indicates the variety of types of labor performed (e.g., domestic work vs. carrying heavy loads).

Injury and workload


Fractures to bones during or after excavation appear relatively fresh, with broken surfaces appearing white and unweathered. Distinguishing between fractures around the time of death and post-depositional fractures in bone is difficult, as both types of fractures show signs of weathering. Unless evidence of bone healing or other factors are present, researchers may choose to regard all weathered fractures as post-depositional.[13]

Evidence of perimortem fractures (fractures inflicted at or around the time of death, while the bone was still fresh) can be distinguished in unhealed metal blade injuries to the bones. Living or freshly dead bone is somewhat resilient, so a metal blade injury generates a linear cut with relatively clean edges rather than irregular shattering.[13] Archaeologists have attempted to use the microscopic parallel scratch marks on cut bones to estimate the trajectory of the blade that caused the injury.[65]

Diet and dental health


Dental caries are caused by localized destruction of tooth enamel, as a result of acids produced by bacteria feeding upon and fermenting carbohydrates in the mouth.[66] Agriculture is strongly associated with a higher rate of caries than foraging, because of the associated higher levels of carbohydrates produced by agriculture.[35] For example, bioarchaeologists have used caries in skeletons to correlate a diet of rice with disease.[67] Women may be more vulnerable to caries compared to men due to having lower saliva flow, the positive correlation of estrogen with increased caries rates, and because of pregnancy-associated physiological changes, such as suppression of the immune system and a possible concomitant decrease in antimicrobial activity in the oral cavity.[68]

Stable isotope analysis


Stable isotope biogeochemistry uses variations in isotopic signatures and relates them to biogeochemical processes. The science is based on the preferential fractionation of lighter or heavier isotopes, which results in enriched and depleted isotopic signatures compared to a standard value. Essential elements for life such as carbon, nitrogen, oxygen, and sulfur are the primary stable isotope systems used to interrogate archaeological discoveries. Isotopic signatures from multiple systems are typically used in tandem to create a comprehensive understanding of the analyzed material. These systems are most commonly used to trace the geographic origin of archaeological remains and to investigate the diets, mobility, and cultural practices of ancient humans.[69][70]

Applications


Carbon


Stable isotope analysis of carbon in human bone collagen allows bioarchaeologists to carry out dietary reconstruction and to make nutritional inferences. These chemical signatures reflect long-term dietary patterns, rather than a single meal or feast. Isotope ratios in food, especially plant food, are directly and predictably reflected in bone chemistry,[71] allowing researchers to partially reconstruct recent diet using stable isotopes as tracers.[72][73] Stable isotope analysis monitors the ratio of carbon 13 to carbon 12 (13C/12C), which is expressed as parts per thousand using delta notation (δ13C).[74] The 13C and 12C ratio is either depleted (more negative) or enriched (more positive) relative to a standard.[75] 12C and 13C occur in a ratio of approximately 98.9 to 1.1.[75]
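
The delta notation can be made concrete with a short calculation; the standard value below is the conventional VPDB 13C/12C ratio, and the sample ratio is hypothetical.

```python
def delta13C(r_sample, r_standard=0.0112372):
    """Delta notation: per-mil (parts per thousand) deviation of a
    sample's 13C/12C ratio from a standard. The default standard is
    the VPDB 13C/12C ratio (~0.0112372). Negative values mean the
    sample is depleted in 13C relative to the standard."""
    return (r_sample / r_standard - 1) * 1000

print(delta13C(0.0112372))          # 0.0 -- sample identical to standard
print(round(delta13C(0.0109), 1))   # -30.0 -- depleted, typical of C3-based collagen
```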

The composition of carbon dioxide in the atmosphere influences the isotopic values of C3 and C4 plants, which in turn affect the δ13C of consumer collagen and apatite according to diet. The values in the accompanying diagram are average δ13C compositions for the respective categories, based on Fig 11.1 in Staller et al. (2010).

The ratio of carbon isotopes in humans varies according to the types of plants digested with different photosynthesis pathways. The three photosynthesis pathways are C3 carbon fixation, C4 carbon fixation and Crassulacean acid metabolism. C4 plants are mainly grasses from tropical and subtropical regions, and are adapted to higher levels of radiation than C3 plants. Corn, millet[77] and sugar cane are some well-known C4 crops, while trees and shrubs use the C3 pathway.[78] C4 carbon fixation is more efficient when temperatures are high and atmospheric CO2 concentrations are low.[79] C3 plants are more common and numerous than C4 plants as C3 carbon fixation is more efficient in a wider range of temperatures and atmospheric CO2 concentrations.[78]

The different photosynthesis pathways used by C3 and C4 plants cause them to discriminate differently against 13C, leading to distinctly different ranges of δ13C: C4 plants range between -9 and -16‰, and C3 plants between -22 and -34‰.[72] The isotopic signature of consumer collagen is close to the δ13C of dietary plants, while apatite, a mineral component of bones and teeth, has an ~14‰ offset from dietary plants due to fractionation associated with mineral formation.[79] Stable carbon isotopes have been used as tracers of C4 plants in paleodiets. For example, the rapid and dramatic increase in 13C in human collagen after the adoption of maize agriculture in North America documents the transition from a C3 to a C4 (native plants to corn) diet by 1300 CE.[80][81]
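
A simple two-endmember mixing model illustrates how such a C3 to C4 transition is quantified. The endmember values below are midpoints of the ranges quoted above, and the ~5‰ diet-to-collagen enrichment is an assumed constant; both vary in practice.

```python
def fraction_c4(delta13c_collagen, c3=-26.0, c4=-12.5, offset=5.0):
    """Two-endmember linear mixing model: estimated fraction of C4
    plants in the diet from a collagen d13C value. The C3 (-26) and
    C4 (-12.5) endmembers are assumed midpoints of the ranges in the
    text, and the +5 per-mil diet-to-collagen enrichment is an
    assumed constant."""
    diet = delta13c_collagen - offset        # back out the collagen offset
    f = (diet - c3) / (c4 - c3)              # linear mixing between endmembers
    return max(0.0, min(1.0, f))             # clamp to a valid fraction

print(fraction_c4(-8.0))    # near 1: heavily C4 (e.g. maize-based) diet
print(fraction_c4(-20.0))   # near 0: mostly C3 plants
```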

Skeletons excavated from the Cobern Street Burial Ground (1750 to 1827 CE) in Cape Town, South Africa, were analyzed using stable isotope data in order to determine geographical histories and life histories.[82] The people buried in this cemetery were assumed to be slaves and members of the underclass based on the informal nature of the cemetery; biomechanical stress analysis[83] and stable isotope analysis, combined with other archaeological data, seem to support this supposition.

Based on stable isotope levels, one study reported that eight Cobern Street Burial Ground individuals consumed a diet based on C4 (tropical) plants in childhood, then consumed more C3 plants, which were more common in the Cape, later in their lives. Six of these individuals had dental modifications similar to those carried out by peoples inhabiting tropical areas known to be targeted by slavers who brought enslaved individuals from other parts of Africa to the colony. Based on this evidence, it was argued that these individuals represent enslaved persons from areas of Africa where C4 plants were consumed and who were brought to the Cape as laborers. These individuals were not assigned to a specific ethnicity, but similar dental modifications are carried out by the Makua, Yao, and Marav peoples. Four individuals were buried with no grave goods, in accordance with Muslim tradition, facing Signal Hill, a point of significance for local Muslims. Their isotopic signatures indicate that they grew up in a temperate environment consuming mostly C3 plants, with some C4 plants. The study argued that these individuals came from the Indian Ocean area and suggested that they were Muslims. It argued that stable isotopic analysis of burials, combined with historical and archaeological data, is an effective way of investigating the migrations forced by the African Slave Trade, as well as the emergence of the underclass and working class in the Old World.[82]

Nitrogen


The nitrogen stable isotope system is based on the relative enrichment or depletion of 15N in comparison to 14N, expressed as δ15N. Carbon and nitrogen stable isotope analyses are complementary in paleodiet studies. Nitrogen isotopes in bone collagen are ultimately derived from dietary protein, while carbon can be contributed by protein, carbohydrate, or fat.[84] δ13C values help distinguish among plant and protein sources, while systematic increases in δ15N with trophic level help determine the position of protein sources in the food web.[70][85][86] δ15N increases 3–4‰ with each trophic step upward.[87][88] It has been suggested that the relative difference between human δ15N values and animal protein values scales with the proportion of that animal protein in the diet,[89] though this interpretation has been questioned due to contradictory views on the impact of nitrogen intake through protein consumption and nitrogen loss through waste release on 15N enrichment in the body.[86]

Variations in nitrogen values within the same trophic level are also considered.[90] Nitrogen variations in plants, for example, can be caused by plant-specific reliance on nitrogen gas, which causes the plant to mirror atmospheric values.[90] Enriched or higher δ15N values can occur in plants that grew in soil fertilized by animal waste.[90] Nitrogen isotopes have been used to estimate the relative contributions of legumes versus nonlegumes, as well as terrestrial versus marine resources.[87][72][91] While other plants have δ15N values that range from 2 to 6‰,[87] legumes have lower 15N/14N ratios (close to 0‰, i.e. atmospheric N2) because they can fix molecular nitrogen rather than having to rely on soil nitrates and nitrites.[84][90] Therefore, one potential explanation for lower δ15N values in human remains is an increased consumption of legumes or of animals that eat them: δ15N values increase with meat consumption and decrease with legume consumption, so the 15N/14N ratio can be used to gauge the contribution of meat and legumes to the diet.
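
The trophic-step arithmetic above can be sketched as follows; the 3.5‰ enrichment per step is an assumed midpoint within the 3–4‰ range quoted, and the baseline value in the example is hypothetical.

```python
def trophic_steps(delta15n_consumer, delta15n_base, enrichment=3.5):
    """Number of trophic steps separating a consumer from a dietary
    baseline, using the ~3-4 per-mil enrichment per step noted in
    the text (3.5 is an assumed midpoint, not a measured constant)."""
    return (delta15n_consumer - delta15n_base) / enrichment

# Hypothetical: human collagen at +10.5 per mil against a plant
# baseline of +3.5 per mil implies roughly two trophic steps,
# consistent with a substantial animal-protein component.
print(trophic_steps(10.5, 3.5))   # 2.0
```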

Oxygen


The oxygen stable isotope system is based on the 18O/16O (δ18O) ratio in a given material, which is enriched or depleted relative to a standard. The field typically normalizes to both Vienna Standard Mean Ocean Water (VSMOW) and Standard Light Antarctic Precipitation (SLAP).[92] This system is famous for its use in paleoclimatic studies, but it is also a prominent source of information in bioarchaeology.

Variations in δ18O values in skeletal remains are directly related to the isotopic composition of the consumer's body water. The isotopic composition of mammalian body water is primarily controlled by consumed water.[92] δ18O values of freshwater drinking sources vary due to mass fractionations related to mechanisms of the global water cycle.[93] Evaporated water vapor is enriched in 16O (isotopically lighter; more negative delta value) compared to the remaining water, which is depleted in 16O (isotopically heavier; more positive delta value).[92][93] An accepted first-order approximation for the isotopic composition of animal drinking water is local precipitation, though this is complicated to varying degrees by confounding water sources such as natural springs or lakes.[92] The baseline δ18O used in archaeological studies is modified depending on the relevant environmental and historical context.[92]

δ18O values of bioapatite in human skeletal remains are assumed to have formed in equilibrium with body water, thus providing a species-specific relationship to oxygen isotopic composition of body water.[94] The same cannot be said for human bone collagen, as δ18O values in collagen seem to be impacted by drinking water, food water, and a combination of metabolic and physiological processes.[95] δ18O values from bone minerals are essentially an averaged isotopic signature throughout the entire life of the individual.[96]

While carbon and nitrogen are used primarily to investigate the diets of ancient humans, oxygen isotopes offer insight into body water at different life stages. δ18O values are used to understand drinking behaviors,[97] animal husbandry,[98] and mobility.[99] One study examined 97 burials from the ancient Maya citadel of Tikal using oxygen isotopes.[100] Results from tooth enamel identified statistically distinct individuals, interpreted as migrants from the Maya lowlands, Guatemala, and potentially Mexico.[100] Historical context combined with isotopic data from the burials was used to argue that migrant individuals belonged to both lower and higher social classes within Tikal.[100] Female migrants who arrived in Tikal during the Early Classic period could have been the brides of Maya elite.[100]

Sulfur


The sulfur stable isotope system is based on small, mass-dependent fractionations of sulfur isotopes. These fractionations are reported relative to Vienna-Canyon Diablo Troilite (V-CDT), the agreed-upon standard. The ratio of the most abundant sulfur isotope, 32S, to rarer isotopes such as 33S, 34S, and 36S is used to characterize biological signatures and geological reservoirs. The fractionation of 34S (δ34S) is particularly useful since it is the most abundant of the rare isotopes. This system is less commonly used on its own and typically complements studies of carbon and nitrogen.[101][102] In bioarchaeology, the sulfur system has been used to investigate paleodiets and spatial behaviors through the analysis of hair and bone collagen.[103] Dietary proteins incorporated into living organisms tend to determine the stable isotope values of their organic tissues. Methionine and cysteine are the canonical sulfur-containing amino acids. Of the two, δ34S values of methionine are considered to better reflect the isotopic composition of dietary sulfur, since cysteine values are impacted by both diet and internal cycling.[103] While other stable isotope systems show significant trophic shifts, sulfur shows only a small shift (~0.5‰).[103]

Figure 3 Illustration of different ecosystems with associated ranges of sulfur isotopic signatures.

Consumers yield isotopic signatures that reflect the sulfur reservoir(s) of the dietary protein source. Animal proteins sourced from marine ecosystems tend to have δ34S values between +16 and +17‰,[80][103][104] terrestrial plants range from -7‰ to +8‰, while proteins from freshwater and terrestrial ecosystems are highly variable.[101] The sulfate content of the modern ocean is well-mixed, with a δ34S of approximately +21‰,[105] while riverine water is heavily influenced by sulfur-bearing minerals in the surrounding bedrock, and terrestrial plants are influenced by the sulfur content of local soils.[101][103] Estuarine ecosystems have increased complexity due to combined seawater and river inputs.[101][103] The extreme range of δ34S values for freshwater ecosystems often overlaps with terrestrial signals, making it difficult to use the sulfur system as the sole tool in paleodiet studies.[101]
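
A crude classifier built from the reservoir ranges quoted above shows why sulfur alone is rarely decisive: the terrestrial-plant range is wide, and freshwater values are too variable to bin at all.

```python
def classify_sulfur_source(delta34s):
    """Crude classification of a dietary protein source from its d34S
    value, using only the reservoir ranges quoted in the text: marine
    animal protein ~+16 to +17 per mil, terrestrial plants -7 to +8.
    Freshwater values overlap both ranges, so anything outside these
    bins (and arguably much inside them) stays indeterminate."""
    if 16.0 <= delta34s <= 17.0:
        return "consistent with marine protein"
    if -7.0 <= delta34s <= 8.0:
        return "consistent with terrestrial protein"
    return "indeterminate (freshwater or mixed sources possible)"

print(classify_sulfur_source(16.5))   # marine range
print(classify_sulfur_source(2.0))    # terrestrial range
print(classify_sulfur_source(12.0))   # between ranges: indeterminate
```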

Various studies have analyzed the isotopic ratios of sulfur in mummified hair.[106][107][108] Hair is a good candidate for sulfur studies as it typically contains at least 5% elemental sulfur.[103] One study incorporated sulfur isotope ratios into their paleodietary investigation of four mummified child victims of Incan sacrificial practices.[109] δ34S values helped them conclude that the children had not been eating marine protein before their death. Historical insight coupled with consistent sulfur signatures for three of the children suggests that they were living in the same location 6 months prior to the sacrifice.[109] Studies have measured δ34S values of bone collagen, though the interpretation of these values was not reliable until quality criteria were published in 2009.[110] Though bone collagen is abundant in skeletal remains, less than 1% of the tissue is made of sulfur, making it imperative that these studies carefully assess the meaning of bone collagen δ34S values.[103]

DNA


DNA analysis of past populations is used to genetically determine sex, determine genetic relatedness, understand marriage patterns, and investigate prehistoric migration.[111]

In 2012, archaeologists found the skeletal remains of an adult male buried under a car park in Leicester, England. DNA evidence allowed them to confirm that the remains belonged to Richard III, the former king of England, who died in the Battle of Bosworth.[112]

In 2021, Canadian researchers analyzed skeletal remains found on King William Island, identifying them as belonging to Warrant Officer John Gregory, an engineer serving aboard HMS Erebus in the ill-fated 1845 Franklin Expedition. He was the first expedition member to be identified by DNA analysis.[113]

Biological distance analysis

[edit]

Biological distance analysis (also called biodistance analysis) is a method used to assess genetic relationships among past human individuals and groups in archaeological contexts by examining skeletal traits, particularly metric and nonmetric features of the skull and dentition. By quantifying biological similarities and differences, this approach provides insight into the population structure of ancient societies, including migration patterns, kinship, and post-marital residence. It is often employed when ancient DNA (aDNA) preservation is poor or when destructive sampling is not possible due to curatorial or ethical constraints. Although less precise than aDNA analysis, biodistance analysis remains a key tool in bioarchaeological research, complementing other molecular, isotopic, and material culture evidence.[114][115][116][117]

Biocultural bioarchaeology

[edit]

The study of human remains can illuminate the relationship between physical bodies and socio-cultural conditions and practices, via a biocultural bioarchaeology model.[118] Bioarchaeology is typically regarded as a positivist, science-based discipline, while the social sciences are regarded as constructivist. Bioarchaeology has been criticized for having little to no concern for culture or history. One scholar argued that scientific/forensic scholarship ignores cultural and historical factors, and proposed that a biocultural version of bioarchaeology offers a more meaningful, nuanced, and relevant picture, especially for descendant populations.[119][120]

Biocultural bioarchaeology combines standard forensic techniques with investigations of demography and epidemiology in order to assess socioeconomic conditions experienced by human communities. For example, incorporation of analysis of grave goods can further the understanding of daily activities.

Some bioarchaeologists view the discipline as a crucial interface between the science and the humanities; as the human body is made and re-made by both biological and cultural factors.[121]

Another type of bioarchaeology focuses on quality of life, lifestyle, behavior, biological relatedness, and population history.[122][123][124] It does not closely link skeletal remains to their archaeological context, and may best be viewed as a "skeletal biology of the past".[125]

Inequalities exist in all human societies.[126] Bioarchaeology has helped to dispel the idea that life for foragers of the past was "nasty, brutish and short"; bioarchaeological studies reported that foragers of the past were often healthy, while agricultural societies tended to have increased incidence of malnutrition and disease.[127] One study compared foragers from Oakhurst to agriculturalists from K2 and Mapungubwe and reported that the agriculturalists did not show the lower nutritional levels expected.[128]

Danforth argues that more "complex" state-level societies display greater health differences between elites and the rest of society, with elites having the advantage, and that this disparity increases as societies become more unequal. Some status differences in society do not necessarily mean radically different nutritional levels; Powell did not find evidence of great nutritional differences between elites and commoners, but did find lower rates of anemia among elites in Moundville.[129]

An area of increasing interest for understanding inequality is the study of violence.[130] Researchers analyzing traumatic injuries on human remains have shown that social status and gender can have a significant impact on exposure to violence.[131][132][133] Numerous researchers study violence in human remains, exploring violent behavior, including intimate partner violence,[134] child abuse,[135] institutional abuse,[136] torture,[137][138] warfare,[139][140] human sacrifice,[141][142] and structural violence.[143][144]

Ethics

[edit]

Ethical issues with bioarchaeology revolve around the treatment and respect for the dead.[4][145] Large-scale skeletal collections were first amassed in the US in the 19th century, largely the remains of Native Americans. No permission was granted by surviving family for study and display. Federal laws such as 1990's NAGPRA (Native American Graves Protection and Repatriation Act) allowed Native Americans to regain control over their ancestors' remains and associated artifacts.

Many archaeologists have been slow to recognize that the public may perceive them as unproductive, or even as grave robbers.[146] Concerns about mistreatment of remains are not unfounded: in a 1971 Minnesota excavation, White and Native American remains were treated differently; Whites were reburied, while Native Americans were moved to a natural history museum.[146] African American bioarchaeology grew after NAGPRA curtailed the study of Native American remains.[119]

Bioarchaeology in Europe was not as disrupted by repatriation issues.[4] However, because much of European archaeology has been focused on classical roots, artifacts and art have been emphasized and Roman and post-Roman skeletal remains were nearly completely neglected until the 1980s. In prehistoric European archaeology, biological remains began to be analyzed earlier than in classical archaeology.

While ethical approaches to the excavation and analysis of physical human remains have received considerable attention, professional and academic dialogue regarding how to appropriately record, share, and display human remains in the digital realm is less developed. While digital technologies for recording and analysing human remains are increasingly accessible, justification for such recording and analysis is essential; for example, 3D scanning performed simply because it is possible is inappropriate and disrespectful to the deceased.[147][148]

See also

[edit]

References

[edit]

Further reading

[edit]
from Grokipedia

Bioarchaeology is the study of human skeletal remains from archaeological contexts to interpret past health, lifestyle, diet, mobility, and demography through empirical analysis of bones and associated materials. The field integrates methods from biological anthropology, such as osteology and paleopathology, with archaeological data to reconstruct lifeways, population histories, and environmental interactions in prehistoric and historic societies. Defined in its current American sense by Jane E. Buikstra in the 1970s, bioarchaeology emphasizes regionally focused, interdisciplinary approaches to mortuary evidence, distinguishing it from broader paleopathology by prioritizing contextual synthesis over isolated pathology. Key achievements include documenting skeletal markers of violence, nutritional deficiencies, and migration patterns, yielding causal insights into human adaptation, social stress, and disease prevalence that challenge or refine historical narratives based on textual records alone. Techniques like stable isotope analysis of teeth and bone reveal dietary shifts and residential mobility, while ancient DNA analysis complements morphological data to trace genetic continuity and admixture, though interpretations remain grounded in verifiable skeletal metrics to mitigate overreliance on probabilistic genomic models.

Definition and Historical Development

Origins and Terminology

The term "bioarchaeology" was first introduced in 1972 by British archaeologist Grahame Clark, who applied it to the study of ecological relationships evidenced by animal and human remains recovered from archaeological sites, emphasizing faunal analysis alongside human skeletal data. This initial usage reflected a broader interdisciplinary approach integrating archaeology with biological evidence to reconstruct past environments and subsistence patterns. In 1977, American biological anthropologist Jane Buikstra redefined the term specifically for the United States context, narrowing its scope to the contextual analysis of human skeletal remains from archaeological sites using methods from biological anthropology, such as osteological examination integrated with cultural and environmental data. Buikstra's formulation emphasized reconstructing population-level biological profiles, including health, diet, and activity patterns, while distinguishing it from purely descriptive bone studies. This redefinition marked the formal establishment of bioarchaeology as a distinct subfield, building on earlier paleopathological work but prioritizing holistic, population-based interpretations over individual case studies. Terminologically, bioarchaeology in the American tradition contrasts with European equivalents like osteoarchaeology or paleo-osteology, which often focus more narrowly on the analysis of skeletal morphology and pathology without the same emphasis on broader archaeological contextualization. Osteoarchaeology, prevalent in the UK and continental Europe, typically encompasses detailed osteometric and aging techniques applied to skeletal assemblages, sometimes extending to forensic applications, but lacks the explicit biocultural framing of Buikstra's bioarchaeology. These distinctions arise from regional academic traditions: U.S. bioarchaeology aligns closely with four-field anthropology, integrating biological data with cultural archaeology, whereas European osteoarchaeology operates more within standalone archaeology departments, reflecting differing institutional histories in handling human remains.
Despite overlaps, the terms are not fully interchangeable, as bioarchaeology mandates explicit linkage to archaeological evidence for interpreting past lifeways.

Evolution of the Discipline

The roots of bioarchaeology lie in early 20th-century physical anthropology, where systematic study of human skeletal remains focused on documenting biological variation through descriptive and typological methods. Aleš Hrdlička, as curator at the Smithsonian Institution from 1903, professionalized these efforts, founding the American Journal of Physical Anthropology in 1918 and training subsequent generations in osteological analysis. Such approaches, however, were often isolated from archaeological contexts and shaped by prevailing racial typologies, prioritizing cranial metrics over holistic biocultural interpretations. The discipline formalized in the 1970s, influenced by processual archaeology's demand for hypothesis-driven, scientific inquiry into past behavior. Jane Buikstra introduced the term "bioarchaeology" in its American sense in her 1977 publication, which proposed integrating skeletal biology with archaeological data to examine diet, demography, and cultural processes, as demonstrated in her analysis of Illinois River Valley remains. This shift, echoed by scholars like George Armelagos, moved beyond individual pathology descriptions to regional, contextual reconstructions of lifestyle and environmental interactions. In parallel, British traditions developed under "human osteoarchaeology," emphasizing similar skeletal-archaeological linkages but with distinct terminological and institutional trajectories. Methodological expansions in the late 20th century and beyond marked further evolution, with biomechanical analyses enabling inferences of activity patterns and stable isotope techniques, pioneered around 1977, facilitating dietary reconstructions. The 1980s introduced ancient DNA applications for kinship and migration studies, while standardized protocols, such as those in Buikstra and Ubelaker's 1994 guidelines for skeletal data collection, enhanced comparability across assemblages.
These advancements addressed critiques like the "osteological paradox," refining interpretations of health from incomplete skeletal samples and solidifying bioarchaeology's role in testing hypotheses about subsistence transitions. By the early 2000s, the field increasingly incorporated ethical considerations in curation and global datasets, transitioning from supportive historical roles to proactive contributions on human agency in the past.

Key Figures and Milestones

Jane E. Buikstra formalized the discipline of bioarchaeology in 1977 through her paper "Biocultural Dimensions of Archaeological Study: A Regional Perspective," where she redefined the term to describe the contextual analysis of human skeletal remains within archaeological settings, distinguishing it from general physical anthropology. As a pioneer, Buikstra advocated for integrating biological data on health, diet, and demography with cultural and environmental contexts, influencing standards for mortuary analysis and population reconstruction; she later edited the seminal volume Bioarchaeology: The Contextual Analysis of Human Remains (2006), which outlined methodological protocols for the field. Her work at Arizona State University established bioarchaeology as a rigorous, interdisciplinary subfield, earning her election to the National Academy of Sciences in 2015. Clark Spencer Larsen emerged as a leading contributor in the late 20th century, authoring Bioarchaeology: Interpreting Behavior from the Human Skeleton (1997, with subsequent editions), which detailed techniques for inferring activity patterns, nutritional stress, and disease from skeletal markers like enthesopathies and trauma. Larsen's research on prehistoric and colonial populations, including Spanish colonial missions, demonstrated how bioarchaeological data reveal impacts of subsistence shifts and colonialism on morbidity and stature. George J. Armelagos, a foundational figure in biocultural anthropology, advanced bioarchaeological insights into diet and infectious disease through pioneering chemical and dental microwear analyses on Nubian skeletal series from the 1960s onward, linking enamel hypoplasias to agricultural intensification and nutritional deficiencies. His 2003 chapter "Bioarchaeology as Anthropology" emphasized the field's role in testing evolutionary hypotheses about human adaptation.
Key milestones trace to the 1960s-1970s processual archaeology movement, which prioritized scientific quantification of human remains over descriptive typology, enabling population-level studies of health disparities. The 1980s saw expanded use of biomechanical modeling for activity reconstruction, while the 1990s integrated chemical analyses like stable isotopes, building on Armelagos's earlier work to trace migration and dietary practices. By the 2000s, Buikstra's frameworks standardized data comparability across sites, facilitating global syntheses; recent advancements, such as ancient DNA applications since the 2010s, have complemented traditional osteological methods but underscore the enduring value of skeletal morphology for causal inferences on lifestyle and pathology.

Core Methodological Approaches

Demographic Reconstruction

Demographic reconstruction, or paleodemography, in bioarchaeology seeks to estimate past population parameters such as age-at-death distributions, sex ratios, fertility rates, and mortality patterns from human skeletal remains excavated in archaeological contexts. This subfield integrates osteological data with statistical models to infer life history events and population dynamics, often constructing life tables or hazard models to quantify risks of death across age classes. Unlike historical demography reliant on written records, paleodemographic approaches must contend with incomplete and biased skeletal samples, necessitating rigorous validation against modern analogs where possible. Age estimation forms the cornerstone of demographic profiles, employing multiple non-destructive techniques to minimize subjectivity. For subadults, dental development—such as eruption sequences and root formation—provides precise indicators, with methods calibrated against radiographic standards yielding accuracy within 0.5–1 year up to age 12. In adults, pubic symphysis remodeling (e.g., the six-phase Suchey-Brooks system, based on 739 individuals) and auricular surface changes offer modal age ranges of 10–15 years, though cranial suture closure and sternal rib-end metamorphosis serve as supplementary checks. Recent advances incorporate transition analysis, a probabilistic framework that accounts for individual variation by modeling age progression through skeletal phases, improving resolution in small samples. Sex determination complements age data to enable sex-specific mortality analyses, primarily through morphological assessment of the pelvis, where the sciatic notch shape and subpubic angle reliably distinguish males (narrower notch, acute angle) from females in 90–95% of well-preserved cases among adults over 18 years.
Cranial metrics, such as mastoid process size and supraorbital robusticity, or discriminant functions from postcranial measurements, extend applicability to fragmented remains, though accuracy drops below 80% for subadults and juveniles due to sexual dimorphism's late ontogenetic onset. Derived metrics include stationary population models assuming constant size to calculate birth rates from age distributions, or bootstrapped resampling to estimate confidence intervals for demographic rates (e.g., via age proxies from dental enamel defects). Bayesian frameworks address non-stationarity by incorporating prior growth rates, as in analyses of historic Japanese cemeteries yielding adult mortality peaks at 40–50 years. Preservation and recovery biases profoundly distort reconstructions, with subadult bones—thinner and more porous—disproportionately underrepresented, often comprising under 20% of assemblages despite comprising 40–50% of living populations, leading to inflated adult survivorship estimates. The osteological paradox exacerbates this, as robust individuals (less frail) are overrepresented in older age classes due to selective mortality, while pathologies may enhance bone preservation, biasing health inferences. Hidden heterogeneity in frailty and migration further complicates uniformitarian assumptions, prompting calls for multi-cemetery syntheses and sensitivity testing to validate profiles against ethnographic data. Despite these hurdles, demographic reconstructions have illuminated trends like elevated fertility inferred from increased juvenile representation in European sites.
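The life-table construction described above can be sketched minimally from age-at-death counts. The death counts below are invented, and real paleodemographic work adds corrections for under-enumerated infants and non-stationary populations:

```python
# Abridged life table from hypothetical cemetery death counts d_x.
# l_x = survivors entering each age class; q_x = d_x / l_x, the
# conditional probability of dying within the class.

def life_table(deaths):
    """Return (l_x, q_x) pairs for each age class from death counts d_x."""
    rows, survivors = [], sum(deaths)
    for dx in deaths:
        rows.append((survivors, dx / survivors))
        survivors -= dx
    return rows

age_classes = ["0-4", "5-14", "15-29", "30-44", "45+"]
rows = life_table([30, 10, 20, 25, 15])      # invented d_x counts
for label, (lx, qx) in zip(age_classes, rows):
    print(f"{label:>6}  l_x={lx:3d}  q_x={qx:.3f}")
```

Note that q_x in the final open-ended class is 1.0 by construction, and that the whole table inherits the biases discussed above (e.g., missing infants depress the first d_x).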

Health and Pathology Assessment

Bioarchaeologists assess health and disease through macroscopic and microscopic examination of skeletal and dental remains to identify evidence of infection, trauma, nutritional deficiencies, and physiological stress. Common indicators include nonspecific markers such as linear enamel hypoplasia (LEH), which records disruptions in enamel formation due to stressors like malnutrition or illness during childhood, typically appearing as horizontal grooves on incisors and canines. Porotic hyperostosis and cribra orbitalia, characterized by porous lesions on cranial vaults and orbital roofs respectively, often signal anemia from dietary inadequacies or parasitic infections, though their etiology requires contextual corroboration to distinguish from postmortem damage. Periosteal reactions on long bones, manifesting as woven bone deposits, indicate nonspecific infectious or inflammatory processes, while localized osteomyelitis—evidenced by involucrum formation and sequestra—points to suppurative bacterial infections. Trauma analysis involves documenting fracture types, healing patterns, and weapon wounds to infer violence, accidents, or occupational hazards; for instance, parry fractures on the ulna suggest defensive injuries from interpersonal conflict. Degenerative conditions like osteoarthritis, identified by eburnation and osteophytes on joint surfaces, reflect biomechanical wear from repetitive activities or aging, though prevalence varies by population and must account for age-at-death biases. Metabolic disorders are diagnosed via skeletal changes such as bowing deformities in rickets or Harris lines in tibiae indicating temporary growth arrests. Dental pathology, including caries, abscesses, and antemortem tooth loss, provides proxies for dietary carbohydrates and oral health, with higher caries rates correlating to increased carbohydrate consumption in prehistoric agricultural populations. Congenital anomalies and neoplasms, rarer in archaeological assemblages, offer insights into genetic predispositions, but require histological confirmation to rule out pseudopathologies.
Interpretation faces the osteological paradox, wherein skeletal lesions may reflect frailty in those dying from acute conditions without time for bone remodeling, or resilience in survivors who healed from chronic insults, leading to potential overestimation of population morbidity if selective mortality is ignored. Introduced by Wood et al. in 1992, this framework emphasizes that cemetery samples bias toward healthier individuals who reached skeletal maturity, as infants and frail adults decompose or lack lesions, complicating cross-population health comparisons. Advances incorporate frailty indexing via hazard models to parse lesion prevalence against age distributions, revealing, for example, elevated frailty in medieval European samples with high tuberculosis markers. Differential diagnosis remains challenging due to lesion nonspecificity and taphonomic alterations, necessitating integration with contextual data like burial goods or site ecology for robust causal inferences.
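The point about parsing lesion prevalence against age distributions can be shown with a toy comparison: two hypothetical samples with identical per-age-class lesion rates yield different crude prevalences purely because of their age-at-death structures, which is one face of the selectivity problem described above. All counts are invented:

```python
# Why crude lesion prevalence is misleading without age structure:
# both sites below have a 20% lesion rate among young adults and a 50%
# rate among older adults, yet their crude prevalences differ.

def crude_prevalence(strata):
    """strata: list of (n_individuals, n_with_lesion) per age class."""
    n = sum(s[0] for s in strata)
    affected = sum(s[1] for s in strata)
    return affected / n

site_a = [(40, 8), (10, 5)]    # mostly young deaths   -> 13/50 = 0.26
site_b = [(10, 2), (40, 20)]   # mostly older deaths   -> 22/50 = 0.44

print(crude_prevalence(site_a))
print(crude_prevalence(site_b))
```

Comparing the stratum-specific rates (identical here) rather than the crude figures is the minimal version of the age-adjustment that hazard-model approaches formalize.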

Activity and Biomechanical Analysis

Activity and biomechanical analysis in bioarchaeology examines skeletal morphology to reconstruct habitual behaviors, occupational patterns, and physical stresses experienced by past populations. This approach relies on the principle that bones adapt to mechanical loading through functional adaptation, whereby repeated stress remodels cortical bone thickness, cross-sectional geometry, and trabecular architecture to optimize strength and efficiency. Key indicators include musculoskeletal stress markers (MSMs) such as enthesopathies—modifications at muscle and ligament attachment sites—and degenerative joint disease (DJD), which reflect repetitive strain rather than acute trauma. Biomechanical methods quantify these adaptations using cross-sectional analysis of long bones, measuring properties like cortical area, second moment of inertia, and polar moment of inertia to model load magnitude, direction, and frequency. For instance, humeral robusticity indices differentiate throwing mechanics in prehistoric populations, with higher torsional strength correlating to projectile use. Femoral midshaft geometry assesses terrestrial mobility, where increased anteroposterior bending strength indicates habitual walking or running on uneven terrain, as seen in comparative studies of hunter-gatherer versus agricultural skeletons. Entheseal scoring systems, such as the Validated Entheses-Based Reconstruction of Activity (VERB 2.0), standardize observations of robusticity and discrete traits at sites like the deltoid tuberosity or Achilles tendon insertion to infer upper and lower limb activities. Applications often reveal divisions of labor, particularly sex-based differences preserved in bilateral asymmetry and site-specific markers. In European and other samples, females exhibit greater upper limb enthesopathies linked to hide processing and food preparation, while males show markers of thrusting or throwing motions.
Prehispanic American Southwest assemblages demonstrate sex differences in humeral cross-sections, with males displaying enhanced torsional resistance consistent with agricultural tasks like maize grinding or tool use, challenging uniform subsistence models. These patterns align with ethnographic analogies but require caution, as body size and age influence marker expression independently of activity. Despite advancements, interpretations face the osteological paradox: robust markers may indicate successful adaptation rather than frailty, and enthesopathies conflate mechanical overload with degenerative or inflammatory processes. Reproducibility studies highlight inter-observer error in MSM scoring, with only 60-70% agreement for discrete traits, underscoring the need for population-specific baselines over universal thresholds. Integration with contextual data, such as artifact assemblages or settlement types, strengthens inferences, as isolated skeletal analysis risks overgeneralization. Ongoing refinements, including finite element analysis, aim to enhance precision in load simulation.
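The cross-sectional properties mentioned above (second moments of area and the polar moment) are commonly approximated by modeling the midshaft as a hollow ellipse. A minimal sketch with illustrative, made-up dimensions:

```python
import math

# Idealized hollow-ellipse model of a long-bone midshaft cross-section.
# a, b: periosteal semi-axes (mm); ai, bi: endosteal (medullary) semi-axes.
# I_x, I_y: second moments of area (bending rigidity about each axis);
# J = I_x + I_y: polar moment (torsional rigidity). Dimensions are invented.

def section_properties(a, b, ai, bi):
    ix = math.pi / 4 * (a * b**3 - ai * bi**3)   # bending about x (mm^4)
    iy = math.pi / 4 * (b * a**3 - bi * ai**3)   # bending about y (mm^4)
    return ix, iy, ix + iy                        # J = Ix + Iy

ix, iy, j = section_properties(a=12.0, b=10.0, ai=6.0, bi=5.0)
print(f"Ix={ix:.0f} mm^4  Iy={iy:.0f} mm^4  J={j:.0f} mm^4")
# A ratio Ix/Iy well above 1 would indicate anteroposterior-dominant loading.
```

Ratios of these quantities (e.g., Ix/Iy, or J standardized by body size) are what comparative statements like "increased anteroposterior bending strength" rest on.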

Dietary and Subsistence Reconstruction

Bioarchaeologists reconstruct ancient diets and subsistence strategies primarily through chemical and morphological analyses of skeletal remains, focusing on bone collagen, tooth enamel, and dental surfaces to infer food sources, trophic positions, and economic adaptations. Stable isotope analysis of carbon (δ¹³C) and nitrogen (δ¹⁵N) in bone collagen provides long-term dietary averages, with δ¹³C distinguishing between C₃ plants (e.g., wheat, rice; δ¹³C ≈ -28‰), C₄ plants (e.g., maize, millet; δ¹³C ≈ -12‰), and marine resources (δ¹³C ≈ -10 to -20‰, with elevated δ¹⁵N >10‰ indicating higher trophic levels). These signatures reflect lifetime protein intake, enabling differentiation between terrestrial herbivores, marine-dependent diets, and mixed economies, as seen in populations where δ¹³C values indicated variable C₄ crop reliance. Dental microwear texture analysis complements isotopes by revealing short-term dietary behaviors through microscopic scratches and pits on occlusal and buccal surfaces, where high pit densities suggest hard-object feeding (e.g., nuts, seeds) and elongated scratches indicate tough foods like meat or fibrous plants. In east-central Mississippian sites, integrated microwear and isotope data showed shifts from mixed C₃/C₄ plant diets with moderate animal protein to increased maize dependence, correlating with intensified agriculture. Buccal microwear specifically detects phytolith-induced abrasions from grasses, aiding identification of foraging or early farming. Subsistence patterns emerge from these proxies: hunter-gatherer groups often exhibit δ¹⁵N >8‰ from diverse protein sources and variable microwear from wild foods, while agriculturalists show depleted δ¹⁵N (4-7‰) from plant-heavy diets and enamel hypoplasias linked to weaning stresses or seasonal scarcity. In transitional Neolithic contexts, elevated δ¹³C in later skeletons signals adoption of C₄ crops, evidencing a shift from foraging to farming, though intra-population variation highlights social inequalities in resource access.
Limitations include diagenetic alteration of isotopes, requiring collagen preservation checks (e.g., C:N ratios of 2.9-3.6), and microwear's bias toward recent meals, necessitating multi-proxy integration for robust reconstructions. Sulfur isotopes (δ³⁴S) further refine coastal versus inland subsistence by distinguishing marine (δ³⁴S >10‰) from terrestrial diets.
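A two-endmember linear mixing calculation illustrates how the δ¹³C endmembers above translate into an estimated C₄ fraction. The ~+5‰ diet-to-collagen enrichment used here is an assumed, commonly cited approximation for illustration only; real work uses locally calibrated baselines and Bayesian mixing models rather than this point-estimate sketch:

```python
# Two-endmember linear mixing for d13C (permil), using the endmember
# values quoted in the text: C3 ~ -28, C4 ~ -12. The +5 permil
# diet-to-collagen offset is an assumption for this sketch.

C3, C4 = -28.0, -12.0
DIET_TO_COLLAGEN = 5.0

def fraction_c4(collagen_d13c: float) -> float:
    diet = collagen_d13c - DIET_TO_COLLAGEN      # back-correct to diet
    f = (diet - C3) / (C4 - C3)                  # linear mixing fraction
    return min(1.0, max(0.0, f))                 # clamp to [0, 1]

print(fraction_c4(-15.0))   # collagen -15 -> diet -20 -> 0.5 (50% C4)
```

The clamping step matters: measured values outside the endmember interval (from marine input, baseline shifts, or diagenesis) would otherwise produce nonsensical negative or >100% fractions.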

Advanced Analytical Techniques

Stable Isotope and Trace Element Studies

Stable isotope analysis in bioarchaeology examines the ratios of non-radioactive isotopes in bone collagen, tooth enamel, and other tissues to infer dietary habits, residential mobility, and environmental exposures of past populations. This method relies on the principle that isotopic compositions of consumed food and water are incorporated into body tissues, with turnover times varying by tissue type—bone collagen reflecting average diet over years, while enamel forms incrementally during childhood. Applications began in the 1970s, with foundational work by researchers like Michael DeNiro demonstrating carbon and nitrogen isotopes' utility for paleodietary reconstruction. Carbon (δ¹³C) and nitrogen (δ¹⁵N) isotopes in bone collagen provide quantitative data on protein sources, distinguishing C₃ pathway plants (e.g., wheat, δ¹³C ≈ -28‰) from C₄ plants (e.g., maize, δ¹³C ≈ -12‰) and detecting marine resource reliance through enriched δ¹³C and elevated δ¹⁵N values. δ¹⁵N enrichment occurs with each trophic level, typically increasing 3-5‰ per step, allowing assessment of animal protein consumption or breastfeeding duration. Oxygen (δ¹⁸O) and strontium (⁸⁷Sr/⁸⁶Sr) isotopes in enamel track mobility; δ¹⁸O varies with ingested water influenced by climate and altitude, while strontium ratios reflect local geology, enabling detection of non-local individuals if values mismatch burial site baselines. For instance, in Migration Period Europe, strontium analysis identified up to 20-30% non-local individuals in some cemeteries, indicating population movements. Sulfur (δ³⁴S) isotopes supplement dietary and mobility data, varying by marine (≈20‰) versus terrestrial (-20 to +15‰) ecosystems and aiding regional discrimination where baselines overlap. Trace elements like strontium, barium, and lead in enamel or bone have been used historically for similar purposes, with elevated strontium/calcium ratios once interpreted as vegetable-heavy diets, but post-depositional diagenesis often alters concentrations, reducing reliability without isotopic corroboration.
Modern protocols emphasize pretreatment to mitigate contamination, such as collagen extraction via acid demineralization for δ¹³C and δ¹⁵N, and incremental enamel sampling for intra-tooth profiling to resolve life history events like weaning (typically a δ¹⁵N drop of 1-2‰ post-weaning). Challenges include baseline establishment from local fauna and plants, as human values require contextual calibration, and diagenetic overprinting, addressed by quality checks like collagen yield (>1%) and C:N ratios (2.9-3.6). Recent advancements integrate multi-isotope approaches with Bayesian mixing models for probabilistic diet estimates, enhancing precision over traditional point-source assumptions. In the Andes, combined carbon, nitrogen, and sulfur data from 500-1600 CE sites revealed dietary shifts tied to agricultural intensification and camelid herding. Trace element studies persist for social status inferences via lead exposure patterns, though ethical concerns arise in forensic applications without standardized validation.
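The collagen quality criteria just cited (atomic C:N between 2.9 and 3.6, collagen yield above 1%) lend themselves to a simple screening step. Sample identifiers and values below are hypothetical:

```python
# Screening collagen samples against the quality criteria cited in the
# text: atomic C:N in [2.9, 3.6] and collagen yield > 1% by mass.
# Sample IDs and measurements are invented for illustration.

def passes_quality(c_n_atomic: float, yield_percent: float) -> bool:
    return 2.9 <= c_n_atomic <= 3.6 and yield_percent > 1.0

samples = {
    "burial_07": (3.2, 4.5),   # well preserved
    "burial_12": (3.9, 2.1),   # C:N too high: contamination suspected
    "burial_19": (3.1, 0.4),   # yield too low: diagenesis suspected
}
for sid, (cn, y) in samples.items():
    print(sid, "accept" if passes_quality(cn, y) else "reject")
```

Samples failing either check would be excluded before any dietary interpretation, since diagenetically altered collagen can shift δ¹³C and δ¹⁵N away from in-vivo values.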

Ancient DNA and Genetic Analysis

Ancient DNA (aDNA) analysis in bioarchaeology involves extracting and sequencing genetic material from preserved skeletal remains, such as bones and teeth, to reconstruct aspects of past population history, kinship, and social structures that skeletal morphology alone cannot reveal. Typically fragmented into short lengths of 40–500 base pairs due to post-mortem degradation, aDNA is subject to chemical damage like cytosine deamination, which produces characteristic substitution patterns used for authentication. Extraction protocols prioritize dense tissues like the petrous bone or dental pulp, often involving decalcification and silica-based purification to maximize endogenous DNA yield while minimizing inhibitors. Key advancements in extraction and sequencing emerged in the early 2010s with the adoption of high-throughput next-generation sequencing (NGS), enabling the recovery of low-coverage genomes from samples over 10,000 years old. Methods such as those developed by Dabney et al. (2013) optimized recovery of short fragments during extraction, yielding up to 100-fold more DNA than prior techniques from Pleistocene remains. Single-stranded library preparation, introduced around 2012, further improved recovery from damaged templates by ligating adapters to denatured strands, facilitating whole-genome sequencing at coverages as low as 0.1x for population-level inferences. These techniques have been refined for challenging samples, including 0.5 g of bone powder with enhanced protocols to boost recovery rates in degraded contexts. In bioarchaeological applications, aDNA elucidates population history, revealing admixture events and migrations; for instance, genome-wide data from 102 Bronze Age Aegean individuals demonstrated gene flow from Anatolia and the Pontic-Caspian steppe into the Aegean around 2000 BCE. Kinship analysis identifies familial ties within burial clusters, as in Mesolithic cemeteries where close relatives (up to second-degree) were interred together, informing interpretations of social organization. It also detects ancient pathogens, such as Yersinia pestis in mass graves dated to 3000 BCE, linking genetic evidence to early disease outbreaks.
Sex determination via Y-chromosome markers or sex-chromosome-to-autosomal coverage ratios complements osteological assessments, while biodistance metrics quantify genetic relatedness between groups, often validating or challenging archaeological narratives of continuity. Challenges persist due to degradation and contamination, with hydrolytic and oxidative processes fragmenting strands and introducing lesions that bias amplification toward undamaged regions. Modern human DNA contamination, potentially from excavators or curators, can exceed 10% in extracts and lead to erroneous admixture signals; mitigation relies on dedicated clean labs, UV irradiation of surfaces, and computational filters like PMDtools for damage profiling. Authentication demands replicate extractions, blank controls, and population-specific allele frequency checks, as exogenous microbial DNA often comprises over 90% of sequences in untreated samples. Despite these hurdles, shotgun metagenomics and targeted enrichment have expanded applicability, though ultra-low coverage (<0.01x) limits fine-scale kinship resolution without imputation.
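The deamination-based authentication mentioned above rests on counting C-to-T mismatches near the 5' ends of reads, where damage accumulates. A toy version with invented reads and reference (real pipelines work from full alignments, in the spirit of tools like PMDtools):

```python
# Toy damage-profile check: authentic aDNA shows elevated C->T mismatch
# rates at 5' read ends from cytosine deamination. The reads and the
# reference here are invented five-base strings for illustration only.

def ct_rate_at_position(reads, reference, pos):
    """Fraction of reads with a C->T change at `pos`, where the reference is C."""
    if reference[pos] != "C":
        return 0.0
    ts = sum(1 for r in reads if r[pos] == "T")
    return ts / len(reads)

reference = "CATGC"
reads = ["TATGC", "TATGC", "CATGC", "TATGC"]   # 3 of 4 show C->T at pos 0

print(ct_rate_at_position(reads, reference, 0))   # elevated at the 5' end
print(ct_rate_at_position(reads, reference, 4))   # near zero further in
```

A steep decay of this rate from the read terminus inward is the expected signature of authentic ancient molecules; a flat, low profile suggests modern contamination.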

Population Affinity and Biodistance

Population affinity estimation in bioarchaeology involves assigning archaeological skeletal remains to biologically related groups using morphological comparisons to modern or ancient reference samples, reflecting underlying genetic ancestry shaped by evolutionary processes such as gene flow, genetic drift, and selection. This approach recognizes human biological variation as clinal yet permits probabilistic group assignments based on heritable skeletal traits, with reported accuracies exceeding 85% for cranial metrics in controlled forensic validations adaptable to archaeological contexts. Biodistance analysis complements this by quantifying inter-population biological distances—measures of phenotypic dissimilarity—to reconstruct population structure, migration patterns, and admixture in past societies, assuming that skeletal morphology primarily tracks genetic relatedness when environmental plasticity is minimized through careful trait selection. Key methods rely on metric traits, such as craniometric dimensions (e.g., bizygomatic breadth, nasal aperture width), analyzed via multivariate statistics like the Mahalanobis D² distance, which accounts for trait correlations and sample variances to estimate biological distance. Non-metric traits, including discrete cranial features such as persistent metopic sutures or accessory foramina, are evaluated using frequency-based divergences such as Smith's Mean Measure of Divergence, offering robustness to the incomplete skeletons common in archaeological assemblages. Dental morphology and postcranial metrics provide supplementary evidence, with dental traits prized for their high heritability (up to 0.8) and resistance to postnatal modification. Recent integrations, such as biodistance networks, model regional kin structures by graphing pairwise distances, revealing microevolutionary patterns like isolation-by-distance in pre-Columbian populations or Mediterranean migrations.
Applications in bioarchaeology illuminate demographic transitions, as seen in analyses of Central European sites where cranial biodistances indicated localized continuity rather than mass replacement, challenging diffusionist models with evidence of gradual admixture rates under 5% per generation. In Mesoamerica, dental non-metric distances between Maya and Zapotec samples quantified biological continuity amid cultural exchanges, supporting endogenous development over external invasion hypotheses. Such studies prioritize heritable traits to isolate genetic signals from ecogeographic influences, though validations against genomic data suggest morphological proxies capture only 60-70% of neutral genetic variance due to polygenic architecture. Limitations arise from assumptions of genetic determination in morphology, as phenotypic plasticity—e.g., masticatory stress altering mandibular robusticity—can inflate distances by 10-20% in nutritionally stressed groups, necessitating corrections via ecogeographic controls. Small sample sizes in archaeological contexts (often n < 30) amplify Type II errors in affinity assignments, with biodistance reliability dropping below 70% without reference samples from temporally proximate populations. Modern forensic-derived references, biased toward admixed urban samples, may misalign with ancient isolates, underscoring the need for site-specific baselines; nonetheless, biodistance remains a vital non-destructive tool where aDNA preservation fails, as in tropical environments.
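The Mahalanobis D² statistic mentioned above can be sketched directly: D² = (μ₁ − μ₂)ᵀ S⁻¹ (μ₁ − μ₂), where S is the pooled within-group covariance matrix. This Python example uses synthetic craniometric measurements and hypothetical values purely for illustration:

```python
import numpy as np

def mahalanobis_d2(group_a, group_b):
    """Squared Mahalanobis distance between two sample mean vectors,
    using the pooled within-group covariance matrix."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    diff = a.mean(axis=0) - b.mean(axis=0)
    pooled = ((na - 1) * np.cov(a, rowvar=False)
              + (nb - 1) * np.cov(b, rowvar=False)) / (na + nb - 2)
    # Solve S x = diff rather than inverting S explicitly
    return float(diff @ np.linalg.solve(pooled, diff))

# Synthetic "craniometric" samples (e.g., bizygomatic breadth, nasal width)
rng = np.random.default_rng(0)
site_a = rng.normal([130.0, 25.0], 3.0, size=(30, 2))
site_b = rng.normal([134.0, 26.5], 3.0, size=(30, 2))
print(round(mahalanobis_d2(site_a, site_b), 2))
```

Unlike raw Euclidean distance, D² downweights correlated traits, which matters for cranial measurements that covary strongly with overall skull size.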

Interpretative Frameworks and Applications

Biocultural Synthesis

Biocultural synthesis in bioarchaeology integrates biological data from skeletal remains—such as indicators of nutritional stress, infectious disease, and trauma—with archaeological, ethnographic, and environmental evidence to reconstruct how sociocultural systems causally influenced health in past populations. This framework emphasizes political-economic processes, where social hierarchies, resource distribution, and labor organization manifest in differential health outcomes, rather than attributing biological variation solely to genetic or random environmental factors. Pioneered in works like Goodman and Leatherman's edited volume, it critiques reductionist models by highlighting how structural inequalities amplify biological vulnerabilities, as evidenced by patterns in growth disruptions and skeletal pathologies across stratified societies. In practice, synthesis involves contextualizing osteological profiles with mortuary data and settlement patterns; for example, at Mississippian mound sites like Moundville (circa AD 1000–1550), lower-status individuals exhibit elevated frequencies of porotic hyperostosis and linear enamel hypoplasias—markers of anemia and childhood malnutrition—correlating with peripheral grave locations and minimal grave goods, indicative of restricted resource access amid agricultural intensification and social ranking. Similarly, in South Asian Harappan populations (circa 2600–1900 BC), increased skeletal evidence of infectious lesions and trauma among non-elite groups aligns with archaeological signs of urban density and environmental degradation, suggesting that cultural shifts toward urbanization exacerbated health disparities through pathogen exposure and nutritional shortfalls. These integrations reveal causal pathways, such as how elite control over surplus production buffered elites against scarcity while burdening subordinates with labor-intensive tasks, yielding measurable biomechanical stresses like enthesopathies on the lower limbs.
Advancements in this approach incorporate stable isotope analyses of diet and mobility with cultural artifacts, as in studies of African diaspora skeletal series from 18th–19th century North American sites, where cribra orbitalia prevalence (up to 40% in subadults) intertwines with historical records of plantation economies, demonstrating how enforced labor and food rationing—rather than inherent biological frailties—drove chronic anemia and stunting. Such syntheses underscore adaptive strategies, like community buffering in egalitarian hunter-gatherer groups versus vulnerability in hierarchical ones, but require rigorous cross-validation against ethnohistoric analogies to mitigate interpretive overreach from incomplete assemblages. Empirical rigor demands prioritizing primary skeletal metrics over narrative imposition, acknowledging that academia's institutional biases may inflate egalitarian interpretations of pre-state societies.

Case Studies in Past Populations

Bioarchaeological investigations of the Neolithic site of Çatalhöyük in central Anatolia provide evidence of health transitions associated with early farming practices between approximately 7400 and 6000 BCE. Skeletal analyses of over 300 individuals reveal increased frequencies of linear enamel hypoplasias, cribra orbitalia, and periosteal lesions compared to pre-agricultural populations, indicating nutritional deficiencies and infectious disease burdens linked to reliance on carbohydrate-rich crops such as wheat and barley. Mobility patterns, inferred from entheseal changes and strontium isotope ratios in tooth enamel, show reduced long-distance movement and greater sedentism, correlating with elevated osteoporotic fractures and joint degeneration from repetitive agricultural labor. These findings suggest a net decline in overall health despite population growth, challenging narratives of uniform agricultural benefits. The East Smithfield cemetery in London, an emergency burial ground established in 1348 during the Black Death outbreak, offers a key dataset for studying epidemic impacts on medieval populations. Excavations in the 1980s recovered approximately 2,400 individuals, with bioarchaeological assessments focusing on age-at-death distributions and skeletal indicators of frailty such as vertebral degeneration and healed fractures. Hazard models applied to 466 adults demonstrate that the plague disproportionately affected frailer individuals, with mortality risks elevated by 50-100% for those over 50 years old exhibiting pre-existing osteological stress markers. Dental wear and stature data further indicate chronic malnutrition in the pre-epidemic population, amplifying vulnerability to infection. This case highlights how bioarchaeology quantifies selective mortality in acute crises, revealing underlying health disparities. In Roman Britain (circa 43-410 CE), strontium, oxygen, and carbon isotope analyses of skeletal remains from sites in Dorset and York differentiate migrants from locals, illuminating population movements and health differentials.
Of 226 individuals examined across multiple cemeteries, about 20-30% showed non-local isotopic signatures consistent with origins in the Mediterranean or North Africa, evidenced by higher δ¹⁸O values from enamel carbonates. These migrants exhibited 15-25% higher prevalences of cribra orbitalia, periosteal reactions, and metabolic bone disorders compared to indigenous Britons, likely due to urban crowding, dietary shifts to imported grains, and exposure to novel pathogens. Biodistance metrics from cranial non-metric traits confirm admixture, suggesting migrants integrated but faced elevated morbidity, transforming local disease landscapes without necessarily higher mortality rates. Southern Andean populations during the Late Intermediate Period (AD 1270-1420) demonstrate migration dynamics through multi-isotope studies of mummified remains from sites like the Azapa Valley. Carbon and nitrogen ratios in bone collagen indicate a shift from C3-based highland diets (e.g., quinoa, potatoes) to C4 maize supplementation among 40-50% of sampled individuals, while strontium isotopes reveal 25% non-local signatures matching highland origins. Accompanying osteological evidence includes increased treponemal lesions and trauma frequencies, attributed to conflict-driven displacements preceding Inka expansion. This integration of isotopes with osteological data reconstructs causal chains of environmental stress, resource competition, and population admixture.
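Non-local isotopic signatures of the kind described in these studies are commonly screened by comparing each individual's ⁸⁷Sr/⁸⁶Sr ratio against a local baseline range, often taken as the baseline mean ± 2 standard deviations. This deliberately simplified Python sketch, with hypothetical values and names, illustrates the rule:

```python
from statistics import mean, stdev

def flag_nonlocal(baseline, individuals, k=2.0):
    """Flag individuals whose 87Sr/86Sr ratio falls outside the local
    baseline mean +/- k standard deviations.

    A simplified screening rule; real studies build baselines from
    local fauna, soils, and water, and combine multiple isotopes.
    """
    mu, sd = mean(baseline), stdev(baseline)
    lo, hi = mu - k * sd, mu + k * sd
    return {name: not (lo <= ratio <= hi) for name, ratio in individuals.items()}

# Hypothetical enamel ratios against a hypothetical local faunal baseline
local_fauna = [0.7090, 0.7092, 0.7091, 0.7089, 0.7093]
burials = {"Burial 12": 0.7091, "Burial 27": 0.7103}
print(flag_nonlocal(local_fauna, burials))  # Burial 27 flagged as non-local
```

The choice of k trades sensitivity against false positives; individuals flagged this way are typically cross-checked with δ¹⁸O and dietary isotopes before being classified as migrants.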

Limitations and the Osteological Paradox

Bioarchaeological analyses are constrained by taphonomic processes that differentially preserve skeletal elements, with more robust bones like long bones surviving better than fragile ones such as crania or vertebrae, leading to incomplete datasets that underestimate certain pathologies. Selective burial practices further bias samples, as cemeteries often represent only specific social strata or exclude marginalized groups, such as infants or the impoverished, resulting in non-representative demographic profiles. Missing data from fragmentation or poor excavation recovery reduces statistical power, limiting comparisons across populations or time periods. A fundamental interpretative challenge is the Osteological Paradox, first formalized by Wood et al. in 1992, which underscores the difficulty of reconstructing prehistoric health from skeletal markers due to selective mortality and demographic non-stationarity. Skeletal lesions, such as porotic hyperostosis or periosteal reactions, require time to develop and thus appear primarily in individuals who survived acute illnesses or stressors long enough to reach adulthood or older ages, while those succumbing rapidly—often the frailest—leave no such evidence. This bias implies that elevated pathology frequencies may signal either a population with high frailty (many vulnerable individuals enduring stress chronically) or one with resilience (survivors adapting to stressors), complicating unidirectional inferences about overall health decline or improvement. The paradox extends to age-at-death distributions, as archaeological samples often overrepresent young adults who experienced cumulative stress but underrepresent subadults dying from infectious outbreaks, whose absence masks epidemic impacts. Bioarchaeologists have responded by developing frailty models, such as analyses of catch-up growth in subadult remains or osteological indices weighted by age and lesion severity, to quantify selective pressures and distinguish individual resilience from population-level morbidity.
Despite these advances, the paradox persists as a caution against over-relying on raw lesion prevalences without contextualizing them against demographic and ecological variables.
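The selective-mortality logic of the paradox can be made concrete with a toy calculation. In this deliberately simplified Python sketch (hypothetical parameters throughout), lesions form only in individuals who survive the acute phase of a stressor long enough for bone to remodel, so the more resilient population leaves the higher lesion prevalence in its cemetery:

```python
def cemetery_lesion_prevalence(p_stressed, p_survive_acute):
    """Fraction of skeletons bearing a visible stress lesion.

    p_stressed: fraction of the living population exposed to the stressor.
    p_survive_acute: chance a stressed individual survives long enough
    for bone remodeling to record the lesion.
    Rapid deaths leave unlesioned skeletons; everyone eventually dies
    and enters the assemblage.
    """
    return p_stressed * p_survive_acute

# Two hypothetical villages with identical stress exposure (60%):
frail = cemetery_lesion_prevalence(0.6, p_survive_acute=0.2)
resilient = cemetery_lesion_prevalence(0.6, p_survive_acute=0.9)
print(round(frail, 2), round(resilient, 2))
# The resilient village looks "sicker" in bone despite identical exposure.
```

The inversion is the paradox in miniature: identical living morbidity (60% stressed) yields very different skeletal lesion rates, so raw prevalence alone cannot distinguish a healthy population from a frail one.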

Controversies and Ethical Dimensions

Scientific Interpretation Biases

Cognitive biases, particularly confirmation bias, significantly influence the interpretation of skeletal traits in bioarchaeology and related fields like forensic anthropology. Experimental studies have demonstrated that providing contextual information, such as a suggested sex of remains, alters assessments: in one study with 41 experienced analysts, a control group classified 31% of remains as male, while a group primed with male context classified 72% as male, and a female-context group classified 0% as male. Similar shifts occurred in ancestry and age-at-death estimates, indicating that prior expectations or external cues can skew non-metric visual assessments, which are foundational to bioarchaeological profiling of past individuals. Theoretical frameworks introduce further interpretive biases, often embedding unexamined assumptions about past societies. For instance, expectations of normative sex-related bone loss have led to gender-ideological biases, where bioarchaeologists project modern patterns of age-related bone loss onto prehistoric populations without sufficient evidence, potentially misinterpreting skeletal fragility as universal rather than context-specific. In facial reconstructions from skeletal remains, bias manifests through the imposition of contemporary population traits or expressions, reinforcing preconceptions about ancient identities despite limited evidentiary support; this is exacerbated by public or curatorial demands for "realistic" depictions that prioritize appeal over fidelity. Sample representativeness poses systemic challenges to interpretation, as archaeological skeletal collections are prone to multiple biases from differential preservation, recovery selectivity, and demographic skewing.
These factors can distort population-level inferences, such as health status or growth patterns, where frailer individuals (e.g., those with stunted diaphyseal growth) are overrepresented in juvenile samples due to higher mortality among compromised children—evident in comparisons of accidental versus natural death cohorts showing up to 2.15 z-score growth deficits in non-survivors. Bioarchaeologists must therefore rigorously model these biases, since, left unaddressed, they lead to erroneous generalizations about past biocultural conditions, underscoring the need for methodological transparency and cross-validation against multiple data streams.

Ethical Challenges in Research and Curation

Bioarchaeological research inherently involves human skeletal remains, presenting ethical challenges rooted in the absence of consent from the deceased and the sacred status of ancestors in many cultures. Historical collection practices, frequently conducted without descendant input, fostered perceptions of exploitation and "grave-robbing," underscoring the need for contemporary protocols that emphasize respect, transparency, and collaboration with descendant groups. These protocols aim to mitigate harm while pursuing empirical insights into past human conditions, such as disease patterns and mobility, which inform causal understandings of biological and social histories. In research design, excavation and analysis demand careful weighing of scientific value against cultural sensitivities; for instance, destructive techniques like bone sampling for stable isotope or aDNA analysis irreversibly alter remains, requiring explicit justification, institutional approvals where applicable, and consultation with affected communities to avoid unilateral decisions. Non-invasive methods, such as radiographic imaging, are prioritized when feasible to preserve integrity, though empirical evidence from ethically vetted skeletal studies demonstrates that invasive research can yield verifiable data on past health impacts without inherent disrespect when descendant perspectives are integrated. Failure to address these issues risks perpetuating power imbalances, particularly in postcolonial contexts where indigenous groups have historically been marginalized in interpretive control. Curation poses ongoing dilemmas in storage, access, and maintenance; inadequate facilities lead to deterioration—estimated to affect up to 40% of U.S. collections due to poor climate control and staffing shortages—forcing curators to balance preservation with ethical access for verified researchers while restricting public or commercial exploitation.
"Orphaned" or legacy collections, often from 19th- and early 20th-century digs lacking full documentation, represent underutilized resources that bioarchaeologists have an ethical duty to prioritize for analysis over new excavations, thereby minimizing grave disturbances and maximizing data from existing materials without additional ethical costs. Publication practices reveal systemic gaps, with a 2022 analysis of 939 papers in major journals (American Journal of Physical Anthropology, International Journal of Osteoarchaeology, and International Journal of Paleopathology) from 2016 to 2021 finding only 3.7% included ethics statements, often buried in methods sections without standardization. This opacity can amplify public mistrust, as seen in high-profile cases like Kennewick Man (1996 discovery), where disputes over analysis delayed repatriation and highlighted clashes between scientific inquiry and tribal sovereignty. Recommendations include mandatory ethics disclosures covering permissions, sampling rationales, and community engagements to enhance credibility and accountability. Repatriation intensifies these challenges, exemplified by the U.S. Native American Graves Protection and Repatriation Act (NAGPRA) of 1990, which requires federal agencies and museums to return culturally affiliated Native American remains and objects; by the early 2020s, approximately 67,000 individuals had been repatriated, yet over 127,000 remained in holdings amid incomplete inventories and contested affiliations. Compliance lags—driven by resource constraints and interpretive disputes—have prompted lawsuits, such as the 2023 ProPublica-reported delays at major institutions, balancing verifiable cultural healing against potential losses in bioarchaeological data on pre-contact health disparities.
Truth-seeking curation thus demands causal assessment: while repatriation addresses historical injustices, uncurated returns risk destroying empirical records of past stressors like nutritional deficits, necessitating collaborative models where tribes co-direct research to retain knowledge benefits. Emerging issues, including digital bioarchaeological data from 3D scans and genomic sequencing, extend ethical responsibilities to data sovereignty and perpetual access, urging protocols that prevent unauthorized reuse while enabling descendant oversight. Overall, ethical rigor in bioarchaeology hinges on first-principles evaluation—prioritizing verifiable evidence from remains while institutionally countering biases toward over-repatriation that could obscure causal histories of past populations.

Repatriation and Cultural Heritage Debates

Repatriation efforts in bioarchaeology center on the return of ancestral human remains and associated artifacts to descendant communities, often pitting scientific preservation against cultural claims of ownership and reburial rights. In the United States, the Native American Graves Protection and Repatriation Act (NAGPRA), enacted on November 16, 1990, mandates that federally funded institutions inventory and repatriate Native American human remains, funerary objects, sacred items, and cultural patrimony to lineal descendants or culturally affiliated tribes upon request. This law emerged from decades of contention over the excavation and curation of remains collected during 19th- and 20th-century archaeological work, which frequently involved unconsented disinterment from gravesites. Bioarchaeologists have raised concerns that such repatriations, frequently followed by reburial, preclude comprehensive analyses like stable isotope studies or aDNA extraction, which could yield data on prehistoric migration, diet, and pathology essential for reconstructing human history. Critics, including archaeologist Clement Meighan, argue that NAGPRA prioritizes unsubstantiated cultural affiliation claims—often rooted in oral traditions rather than empirical genetic or morphological evidence—over the universal scientific value of remains, potentially erasing irreplaceable evidence of pre-Columbian population history. The case of Kennewick Man, a 9,000-year-old skeleton discovered in Washington state on July 28, 1996, exemplifies these tensions. Initially claimed by five tribes under NAGPRA for repatriation and reburial, the remains sparked litigation when eight scientists sued the U.S. Army Corps of Engineers, arguing the law's cultural affiliation requirement was not met due to morphological features suggesting possible non-Native ancestry.
A 2004 federal court ruling favored study access, citing insufficient evidence linking the individual to modern tribes under NAGPRA's criteria, allowing limited analyses that revealed no direct European ties but affinities to Polynesians and the Ainu. Subsequent 2015 genomic sequencing, however, indicated closer relations to Native American groups, prompting the Department of the Interior to reverse course and repatriate the remains on February 17, 2017, to a coalition of tribes for burial at an undisclosed site. This outcome highlighted how advancing technologies like aDNA can shift debates but also how policy interpretations may override initial judicial findings, limiting further bioarchaeological inquiry into Paleoamerican diversity. Broader cultural heritage debates extend beyond NAGPRA to ethical frameworks for curation and display. Proponents of repatriation emphasize rectifying historical injustices, such as the estimated 200,000 Native American remains held in U.S. institutions as of the early 1990s, many acquired through grave looting or unethical collecting. Yet, as of January 2023, over 100,000 individuals' remains remained unrepatriated despite NAGPRA's requirements, with noncompliance attributed to vague affiliation standards and institutional delays. Bioarchaeologists counter that reburial destroys research potential; for instance, non-destructive imaging or minimal sampling protocols could preserve scientific access, but tribal preferences often preclude this. Internationally, analogous conflicts arise, as in European museums holding colonial-era remains from former colonies, where guidelines advocate consultation but lack enforcement, fueling arguments that advocacy pressures in academia—evident in peer-reviewed calls for decolonizing collections—undermine objective scholarship by subordinating evidence-based reconstruction to subjective descendant narratives.
Recent 2023 NAGPRA amendments impose stricter timelines and penalties for noncompliance, yet they do not resolve core scientific critiques, such as the law's deference to religious beliefs over empirical validation. These debates underscore a causal tension: while repatriation addresses proximate ethical harms from past collecting practices, it risks long-term epistemic losses in understanding biocultural evolution.

References
